I want to welcome you to Getting Started with PostGIS and OpenStreetMap. My name is Lindsay Hooper. I'm one of the PostgresConf organizers, and I'm so excited to be here on the line with you and with Ryan Lambert, owner of Rustproof Labs. A little bit on Ryan: he's been working with GIS since 2011, and he got his start in PostGIS when, on a quest to update a road map, he started using PostgreSQL, PostGIS, and OpenStreetMap. He's been a contributor to OpenStreetMap projects since 2015 as well. Currently he is working on a book on how to use PostGIS and OpenStreetMap together, and he's run PostGIS at both large and small scales. I'm going to kick it off to Ryan now, so enjoy, and Ryan, take it away.

Thank you, Lindsay, for the introduction, and thank you to PostgresConf for hosting this session and making this whole thing possible. Thank you also to everyone who's joined us live today. It's great to have you here, and we are also recording the session to make it available for anyone who wasn't able to make it live. I am Ryan Lambert. The best place to find what I'm interested in is on our blog; I publish something about twice a month, and most of it focuses on Postgres, PostGIS, and related technologies in that ecosystem. I am also on Twitter if you want to follow me there; I talk about a lot of the same stuff. Ultimately, I really just love technology and data. I've been involved with technology all of my life. I was lucky to have a computer in the house growing up, and that really set the stage for me getting into a technology career later in life. I love Postgres and maps in general, which was one of the great things about finding OpenStreetMap: cool technology mixed in with something I've loved all my life, going back to the paper copies.
So this is the first of what is planned to be a six-part series, and it's the only one of the six that I would call non-technical; it doesn't require background knowledge or the ability to be thrown into the deep end. The other five all get down into the nitty-gritty of the technology and the data and how to use them effectively together, but today we're going to keep it pretty high level. We're going to go over a high-level view of the software, PostgreSQL and PostGIS. I'm going to go over the OpenStreetMap project, which provides the data I'm using, then show some examples of why these tools together make such a great pair in your toolkit, and to end it off I'll have a few examples that illustrate how spatial data looks when you put it into the relational world of SQL. All of this is made possible because the whole ecosystem we're using here is open. PostgreSQL and PostGIS are open source software, and OpenStreetMap is open data with open source software around it as well, and that really opens it up to anyone and everyone who wants to get involved with spatial analysis.

Before I can really jump in and explore what PostGIS is, I need to set the stage a little with what a more traditional workflow with spatial data might look like. This is how I was introduced to GIS work: we had really powerful, cool desktop software that connected to a whole bunch of data on a file server. A file server was required for us because our data was considered sensitive, and we had a policy that said you can't have it on your local drive. So we had a file server that stored all of our shapefiles, our attribute data, our geodatabases, and all of those goodies. The downside of this traditional model is that you're limited in what you can do and how you can scale, and to start with, the file server really is simple storage.
It's nothing beyond your basic storage layer; it doesn't give you any bonus or benefit or feature beyond storage. One of the side effects of having your data on a file server, especially when working with a team, is that you end up with a whole lot of versions of the truth. You have a copy and paste of each shapefile for specific projects. You might make a project-specific change in one shapefile, and now you have multiple versions of the truth, and this just seems to be encouraged by the use of a file server. There are also limitations to how granular you can make your security model. In general, file servers grant permissions in wide swaths based on user groups or functional groups, but granular access simply is not there at the row level, and that's one of the limitations if you work in an environment with security compliance requirements. And in the world today, desktop software is only one of the components. We have so much on the web, and if you want to put your data on the web, linking your desktop software to a file server probably isn't the right route.

On the desktop side of that traditional model we have some other limitations. My desktop is only so powerful; I have a quad-core CPU and 8 gigabytes of RAM. You might have 16 or 32 gigabytes of RAM, or a faster processor with more cores, but what you can put inside a typical desktop is limited. It's also competing with the human using the desktop, namely me. I have 42 tabs open in Chrome at all times, and I probably also have a virtual machine or two and a few other pieces of software running. So if you're relying on your desktop to provide the oomph for the processing power, it is simply limited, and some of the processes you will run with spatial analysis and spatial data are very heavy-handed.
They require a lot of power. When I do something like batch geocoding a large set of data, I hit play, watch it for a couple minutes to make sure it doesn't crash right away, and then I have to go take lunch for like two hours, because it's going to take two or more hours to run some of these big processes, and I can't do anything else with that machine until it completes, or I risk interrupting the process and crashing it halfway. Another side effect of having all of your data on a file server and all of the processing in your desktop software is that all of that data has to go back and forth between the two. This is especially true if you are also updating data, not just reading it: you read the data off the file server, your desktop software makes a change to it, and then it sends it back to the file server. This might or might not be a big deal depending on your environment. In my case, my desktop was physically a quarter mile away from our data center, on the same campus, one building over, and with all of the Ethernet and switches in between, the latency was a killer to my productivity. Having to send all of that data back and forth just to do simple processing isn't the best use of your network bandwidth.

To overcome these problems, this is where we introduce PostGIS and a database server, in this case PostgreSQL. We feed our data into a single database, our single source of truth. We can consume all of our shapefiles and all of our CSV files, and honestly, those CSV files were probably in a database to start with; you only exported them to CSV to get them into your desktop software. So now you have a database that holds all your data, and we can move over a lot of that processing. A lot of the work that the desktop software previously had to do, our spatial database server can handle.
We can do the joins between feature layers, joins to attribute layers, and analytics, all right in the database, and connect to it from your desktop. You can still do spatial analysis from your desktop software, nothing limiting you there, but this lets you choose which analysis you want to do on the desktop and which you want to offload to the server, and really lets your desktop resources focus on more specific tasks.

So, PostGIS. The official definition is up here. I like to just say that it adds superpowers to your existing relational database, and it really does when you see the plethora of features and functions made available through PostGIS. Whoa, that's a lot of cool stuff. One of the key phrases in the official definition is that they call it a spatial database extender, and that's a key component here, because PostGIS really does sit on the shoulders of PostgreSQL. It is an extension that takes advantage of PostgreSQL's innate ability to be customized. That was a goal from day one with Postgres: to make it easy to modify and customize to your needs and your workflow, and PostGIS took full advantage of that and continues to reap the benefits.

PostgreSQL is a standard relational database management system, and it is open source. If you hear the words "open source" and you think, uh oh, I can't get support, that is false; there are plenty of organizations out there that will provide you enterprise-level support for your Postgres databases. So just because you hear "open source," don't think it's going to be unsupported and a big headache, because it is not, and it's a really great community, both among the companies that support Postgres and in the wider open source community as a whole. Being a relational database, Postgres is going to take really, really good care of your data.
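To make that "extension" point concrete, here's a minimal sketch of what enabling PostGIS looks like once the extension packages are installed on the server; it's one statement per database:

```sql
-- Enable PostGIS in the current database (the extension
-- packages must already be installed on the server).
CREATE EXTENSION postgis;

-- Sanity check: report the installed PostGIS version details.
SELECT PostGIS_Full_Version();
```

From that point on, the spatial types, functions, and operators are available in every query against that database.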
It loves your data, and it's going to keep it very well protected. You get all of the wonderful benefits of a relational database: easy SQL querying, knowing that your data is consistent and reliable, transactions. If you need to scale up and have more power so it runs faster, easy enough; you can scale horizontally or vertically. You can do streaming replication and set up read-only nodes at different physical locations for fast read queries, load balancing... really, the sky is the limit with what you can do with Postgres and its existing ecosystem. And all of this lives underneath the PostGIS functionality.

We're going to focus in a little bit more on SQL querying, because this is a really big benefit. Lukas Eder gave a talk about how Java programmers would have to program a task to get some data out versus how SQL programmers do the same thing, and he uses the terminology "fourth-generation language." In most intro SQL courses we call it a declarative programming language, because you get to say, "here's what I want," and you don't have to think about how it gets it. You just say you want this data, and the database goes out and gives it back to you. I like to relate this to a manager-and-employee relationship. A manager with a good, well-trained employee can say, "I'd really like you to get this task done for me, and I need it fast," and the employee can go do that task and come back with the result. The manager does not have to ask, "hey, how did you actually get that?", and most times the manager shouldn't care how they got it. That's the way it is in SQL: you don't have to care about how you get it.
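As a tiny illustration of that declarative style, the query below just states the result it wants; the table name is hypothetical, and the database's planner decides how to fetch the rows:

```sql
-- Declarative: describe the result, not the steps.
-- (hypothetical "customers" table, purely for illustration)
SELECT name, city
FROM customers
WHERE city = 'Denver'
ORDER BY name;
```

No loops, no file handles, no "open cursor, advance, compare" logic; the how is the database's problem.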
I catch myself sometimes writing things in Python, and I start writing these weird loops, and then after a few seconds: wait a minute, do it in SQL. Because SQL, in a nutshell, is fast and productive, and not just on the server side and the technology side, but on the human side too. You don't have to go through a whole lot of brain damage thinking about what you want and how to get it. You just decide what you want and let the database take care of the rest. And this is especially true with spatial data, where it can take advantage of the wonderfulness of SQL syntax.

So if you're familiar with relational databases in general but not so much with Postgres, here are a few things Postgres has that really add some sugar. hstore and JSON are at the top of the list; I put them at the top because OpenStreetMap relies heavily on the hstore data type. hstore is another Postgres extension, and it lets us store the key/value data that OpenStreetMap uses. We use it for OpenStreetMap for historic reasons, because JSON is a relatively new implementation in Postgres. It's getting really, really good with version 12, but for historic reasons the OpenStreetMap tooling still uses the hstore extension. Row-level security: going back to the file server and the broad swaths of permissions you can grant there, row-level security in Postgres lets you write SQL code, make that code a security policy, and apply it to your data. So your security policies can use the data itself to decide who can see the data. This lets you really effectively manage who can see what. Say you had an employees table with all their addresses; maybe you only want managers to be able to see their own employees, and not the employees of another manager. You can set up a row-level security policy that allows that kind of granularity of access. Really, really cool stuff if you have a security-conscious environment.
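A minimal sketch of that manager-only policy might look like this; the table layout is hypothetical, and it assumes each manager connects as their own database role:

```sql
-- Hypothetical employees table; "manager" holds the
-- database role name of that employee's manager.
CREATE TABLE employees (
    id      bigint PRIMARY KEY,
    name    text NOT NULL,
    address text,
    manager text NOT NULL
);

ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

-- Each manager sees only the rows where they are the manager.
CREATE POLICY manager_only ON employees
    USING (manager = current_user);
```

Once the policy is in place, an ordinary `SELECT * FROM employees` silently returns only the rows the connected role is allowed to see.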
Materialized views and generated columns: these are both ways Postgres lets us pre-compute some values and save them to disk. Materialized views have been around for a long time; a materialized view is a lot like a regular SQL view, except it saves all the data to disk. Generated columns are new in version 12, and they let you calculate a single column's value based on other data in that row, or on functions. Really cool stuff that lets you offload the expensive portions of your spatial analysis. GiST indexes: that's what, in this context, we'd call a spatial index, and that's really what makes all the querying super fast. When you say, "I want all the coffee shops within five miles," a GiST index can make that happen lickety-split. TOAST is there for when your large data comes in, and spatial data, especially polygons, can get large quite quickly. When your data gets too big to fit comfortably in Postgres's 8-kilobyte pages, it gets put into this oversized storage we call TOAST, which also compresses it automatically by default, so you get some storage size benefits there as well. Just some cool extra benefits that Postgres brings to the table. And because PostGIS is an extension, it gets to stand on the shoulders of a giant, and by putting our data in the same database as our spatial processing, we get to cut out a whole lot of network back-and-forth and all of the latency attached to it. We have our data and our processing in one place, which makes it super, super efficient. If you're familiar with spatial analysis, you may be wondering: can PostGIS do what you need it to? The answer is probably. Without knowing everything you're doing I can't say in advance, but there's a really good chance it can already do what you need.
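To make the coffee-shop example concrete, here's a sketch under assumed names (a `coffee_shops` table with a geography column `geog`); with geography, ST_DWithin takes meters, and five miles is about 8,047 meters:

```sql
-- Spatial (GiST) index so the distance search can skip
-- everything that isn't in the neighborhood.
CREATE INDEX coffee_shops_geog_idx
    ON coffee_shops USING GIST (geog);

-- All coffee shops within five miles (~8047 m) of a point.
SELECT name
FROM coffee_shops
WHERE ST_DWithin(
          geog,
          ST_MakePoint(-105.00, 39.75)::geography,  -- lon, lat
          8047
      );
```

The index is what turns this from "compute the distance to every row" into "check only the candidates nearby."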
One of the fun things about talking about PostGIS is that every time I show a specific way to do something, afterward someone will come up to me and say, "why did you do it this way instead of that way?" And my response is normally, "huh, I didn't know that way existed; let me go see if I'm doing it wrong," because there's so much power and so many options inside PostGIS that it's hard to know everything, and to always know the best method for your particular task. If you look at the function count, over 1,300 functions get added to a database when you install the PostGIS extension. One point to note is that with PostGIS 3 the raster components have been split out, so in PostGIS 3, if you install just the base vector portion, you'll have a smaller number than that.

Any time you put data into a system, one of the natural questions to ask is: how do I get it out? With PostGIS we have no shortage of tools. I've been talking about the generic desktop software so far, and those are really good options, if on the heavy-handed side; and if you're a database person, you might not want to be writing SQL queries in those. Luckily, we have some specific tools for the SQL lovers among us. DBeaver and pgAdmin 4 have both introduced spatial viewers in the past year or so. A spatial viewer lets you write and draft your spatial SQL in an easy editor, then select some or all of the data and visualize it on a map. Really cool, and really handy for those of us who need to write the queries but also want to visualize the data at the same time. And then for the developers, if you want to write something on a web layer or in a programming language, there are plenty of tools for the variety of languages and frameworks of your choice, so you can probably get in touch with the data.
And of course, all of those DBAs in Postgres land who love psql and swear by it: no problem, they can use it too. Again on interoperability, there are plenty of formats we can work with. If your spatial data is in the database, you can wrap a query up with ST_AsGeoJSON and you'll get a GeoJSON dataset. If you need a well-known text representation, there are multiple functions for that. Shapefiles, no problem; those are unfortunately handled by command-line tools, I don't know of a way to do it directly in SQL, but there are ways to interface with shapefiles and all of those other formats. And one of the coolest things I've seen lately is the ability to create a full-blown tile server with PostGIS. MVT stands for Mapbox Vector Tile, I believe, and a post by Paul Ramsey goes through the steps you can take in order to stand up a tile server in PostGIS. One more point I like to throw out there about PostGIS: we are not in any way restricted to Earth. In this series we're talking OpenStreetMap, so that obviously puts us on Earth, but if you're in other fields that have spatial kinds of data, PostGIS is a good option to consider. I know the European Space Agency gave a talk last March in New York about how they are using PostGIS and Greenplum. Teaching datasets are another good example of when you might not want to bind something to Earth, because it's a whole lot easier to use simple integers instead of real latitude/longitude values.

So, on to OpenStreetMap. This is the data side of things as I'm presenting it. I do need to point out, though, that OpenStreetMap is not limited to just data. The OpenStreetMap ecosystem has its own software and interfaces, and there's a whole lot going on there, but for the purpose of this talk I am focusing on OpenStreetMap simply as a data source.
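Circling back to those export formats for a moment, getting GeoJSON or well-known text out of a query is a one-function wrapper; the `buildings` table and `geom` column here are stand-ins for whatever your schema actually has:

```sql
-- GeoJSON representation of a geometry column.
SELECT name, ST_AsGeoJSON(geom) AS geojson
FROM buildings
WHERE name = 'Coors Field';

-- The same geometry as well-known text (WKT).
SELECT name, ST_AsText(geom) AS wkt
FROM buildings
WHERE name = 'Coors Field';
```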
Just so you're aware, the project is larger than what I'm showing; in this context, we are concerned only with the data. OpenStreetMap is a really cool project. It's been around for about 15 years, it started in the UK, and it represents a global dataset. This is one of the reasons I love OpenStreetMap as one of my core base datasets: no matter where in the world I want to work on data, I can at least get a starting point from OpenStreetMap. I know of a number of US-specific sources for this and European sources for that; OpenStreetMap is a global data source, and that gives us a consistent starting point that is nice to work with. It is a semi-structured dataset, which goes back to the hstore data type I was talking about earlier: they store the data in key/value pairs, and if you're familiar with semi-structured datasets, you might be fully aware of the pain this can cause on the reporting end of things, because flexibility up front makes for more work later on.

The official definition of OpenStreetMap is up here. I can never seem to remember this whole blurb when I'm talking to people on the street, so I just say it's Wikipedia for maps. That's about the most succinct way I can put it, and most people understand what Wikipedia is. The important concept is that anyone can edit it, which makes you either happy or nervous depending on where you stand on that topic, but it's basically Wikipedia for maps: people around the world create accounts, sign in, make changes, and make it a better place. OpenStreetMap is used very, very widely; if you have seen a map made by anyone other than Google, there is most likely an OpenStreetMap source behind it. These maps are all over the place, its popularity has continued to grow over the years, and the list of who's using it keeps growing, which is really encouraging to see, because it means the community itself is growing.
So OpenStreetMap itself, when you look at it in a browser: hey, it looks like a map. This map has a few differences, though; namely, in the top right corner there's a sign-up feature. Once you sign up, you can log in and edit the data. So if you don't like what you see in the map, if you know it's wrong, or there's been construction recently that you're aware of, you can edit it and make it better. When we're looking at a map like this, just think a little about how much data goes into making it. How many data points does it take? We have city names that show up, there are some forests, the Rocky Mountains are off to the west; there's a whole lot of information that has to be represented to draw a map like this. Every pixel here is here because of data in a database somewhere. That's pretty cool, really, at least to me, but I'm a nerd like that.

So OpenStreetMap has a whole lot of data. This is a very incomplete list of the elements you can find in OpenStreetMap. These are some of the more common elements that most people will find themselves using at least once or twice in their career, but there is everything down to fire hydrants, trees, and crosswalks in OpenStreetMap. It is a semi-structured, loosely regulated dataset, so you can put in a whole lot of specific things that might not apply to everyone but do apply to your locality. The data goes into OpenStreetMap as key/value pairs. This is an example of how a sidewalk might be tagged. The left side of each equals sign is the key, and the right side is the value. One thing to point out here: you might notice that some of the values can also be keys. In the case of footway, we have highway=footway, so we're describing a kind of highway as a footway, and then we can further describe that footway as a specific type.
In this case it's a sidewalk, and then we can have other attributes that describe, say, the surface: surface=paved, or maybe surface=dirt, can go in there. There's a lot of information out on the wiki, and if you're going to be editing OpenStreetMap, you absolutely need to be there. If you're consuming OpenStreetMap, I highly recommend you get familiar with the wiki too, because it is going to help you much better understand the content of the data you are working with. One of my favorite things about the wiki, which I'm not going to spend a lot of time on here, is that it has pictures, and while a picture of a sidewalk might not seem all that necessary, as you start looking through the diversity of the data and what things mean in different localities, those pictures can become extremely helpful. So the wiki is your data dictionary as far as OpenStreetMap tagging goes. When you take these keys and turn them into more of a tabular dataset, like we're used to in database land, your keys become your column headers. In this case, our key highway has turned into a column header highway, and we can see that there are footways and residentials and other classifications of highway. I've also included a maxspeed column; not every row will have a maxspeed. Even where roads should have a maxspeed, it doesn't always get set, and that's just one of the nuances you have to understand with this data: it's going to be partially complete in various places.
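In SQL terms, that keys-become-columns step looks something like this; the table and column names follow common osm2pgsql conventions (an hstore column named `tags`), so treat them as assumptions about your particular load:

```sql
-- Pull individual tags out into tabular columns.
-- "->" extracts a value from hstore; "?" tests key existence.
SELECT osm_id,
       tags -> 'highway'  AS highway,
       tags -> 'maxspeed' AS maxspeed   -- often NULL, as noted
FROM planet_osm_line
WHERE tags ? 'highway';
```

The maxspeed column coming back NULL for many rows is exactly that partial-completeness nuance showing up in query results.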
Going back to the volume of data: this is a 3D map provided by f4map.com. They have a demo site with a fully navigable 3D map built on OpenStreetMap data. I'm not going to do a live demo of it during this, because it takes a whole lot of oomph to run in the browser and I don't want to risk that, so I have this screenshot to illustrate how much data is here. We're looking at Coors Field in downtown Denver, where the Colorado Rockies play, and to think about how many attributes and how many polygons and points go into making the 3D representation of a stadium is just mind-boggling. And then, you know, we have flags up here on this ridge, we have all these trees around, there are street lamps in the parking lots; this is all data in OpenStreetMap. There is a lot of information here at your fingertips, freely available for you to take advantage of. As the number of tags grows, they don't each get their own column in the database; as I've mentioned, we use the hstore data type to shove all those key/value pairs into a single column. In this case, I've run a simple SQL query where I'm filtering for Coors Field, that's the polygon we were just looking at in 3D, grabbing it from a table, and splitting out the key/value pairs that are all in that one column. And we can see down here the diversity: we have ele, that's elevation, 1585. The wiki will tell you that this unit is supposed to be meters; so if you don't know what unit something is in, the wiki will help you out there. We have a website for where to find more information, on mlb.com; we have a Wikipedia name; we have roof colour in hexadecimal. And colour is a good thing to point out: going back to the beginning of OpenStreetMap, remember it was started in the UK, so all of us American English spellers will have to relearn how to spell a few words in order to work with
OpenStreetMap. But this here isn't even the full list of attributes, because the screen I can share is only so big. There are so many attributes in this data that you can take advantage of in whatever type of analysis you may be doing.

The spatial data we're talking about with OpenStreetMap is called vector data. I briefly mentioned the split between vector data and raster data earlier, but we are dealing with vector data, and in a nutshell that's points, lines, polygons, and the multi-versions of each of those. That's what says "this is a point, this is a polygon, it's next to that line": that's the spatial information. But then, as we've seen, we also have this amazing wealth of semi-structured pairs of data, so there is so much information to work with here. You can use OpenStreetMap for a whole lot of things. As was mentioned, I got started because I just wanted a roads layer; the roads layer we had was horribly out of date and horribly structured, and I just wanted a clean slate for some base maps. That was where I discovered OpenStreetMap and, ultimately, how I discovered Postgres and PostGIS. Before that I had worked in other databases, but wanting to create that roads layer was my first exposure to PostGIS. The 3D rendering is really, really cool; if you have some time to kill, go to that f4map site, pick somewhere you're interested in, zoom around, and see what kind of 3D map you have in that area. I'm hoping to see more and more development going into that side of things. And then analysis is really an open-ended box. What do you mean by analysis? Well, it could be a whole lot of things; there are so many things this dataset can be used for, and your imagination is really the limit. Now, when you want to get your hands on OpenStreetMap data to use it, there are a number of ways, and the OSM wiki has a whole page dedicated to this.
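One quick aside before we get into downloads: the key/value expansion I described for the Coors Field polygon looks roughly like this (again assuming an osm2pgsql-style table with an hstore `tags` column):

```sql
-- Expand every tag on one feature into key/value rows,
-- using hstore's each() set-returning function.
SELECT (kv).key, (kv).value
FROM (
    SELECT each(tags) AS kv
    FROM planet_osm_polygon
    WHERE name = 'Coors Field'
) AS t;
```

Each tag comes back as its own row, which is what makes that list of ele, website, roof:colour, and the rest easy to scan.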
You can get the whole planet as one big file; the compressed version in PBF format is over 40 gigabytes, and it grows enormously when you decompress and load it. On the OpenStreetMap page in the browser that I showed earlier, there is an export button that allows you to grab small areas of data at a time. There's an API you can interact with to get data out. And then there are extract services, such as Geofabrik, that provide regional extracts, and that's actually the one I have used the most. I normally want larger areas of data, so the browser export really isn't an option; in all reality, I usually want all of Colorado to start with, or all of the US, some big region, and that's why I go for these regional extracts. Geofabrik's service is really awesome. They update their datasets basically once a day, they have regional exports for the entire world, large and small regions alike, and PBF is the most common format we'll be working with. Their download page looks about like this; this is a small portion of it. We have the OpenStreetMap data extract listing down below, with multiple formats; some files come in more than one format, depending on what they've decided to do. I'm not sure how they make that decision; what I do know is that every region has a PBF file, and that's why I've focused on using PBF exclusively: whatever region I want it for, I can get it. The North America PBF is 8.8 gigabytes; Europe is 20.6 gigabytes. These are not tiny files by any stretch of the imagination. Before I click on North America, notice that the map on the right-hand side will overlay a red color on the region you have selected, so you know what you're getting when you download something. If I click into North America, there are a few helpful bits
here. First, up at the top, it tells you when it was last updated, so I can see that this was updated 19 hours ago. If I was very concerned about getting the most recent data and I could wait five or six hours, I reckon I would wait for the next update, if that was my main goal. Scrolling down on the North America page, we get all of the additional subregions within North America. So if I wanted to download all of Canada, I can get all of Canada without having to load the rest of the US, and for the US these are roughly split by state. Down at the bottom there are some other special regions that combine a handful of states based on larger geographic regions, and depending on what region of the world you're looking at, these regions and subregions will reflect those areas.

Coming back to download size: looking at North America, this screenshot actually shows 8.7 gigabytes, so it's grown 0.1 gigabytes since I took this shot a couple weeks ago. PBFs are a highly compressed format. There's a whole lot of information on the wiki about the format itself; my eyes kind of glazed over as I got into it, so feel free to dig in as much as you care to. But the key detail here is the compression: PBF comes out around 30 to 50 percent smaller in final size than your comparable gzip or other compression formats. So that 8.8 gigabytes for North America will be about 50 percent larger if you download another compressed format, and once you decompress it and load it into PostGIS, the data size is going to grow substantially because of how PostGIS stores the data versus how the PBF format works. Just something to be aware of on those file sizes: these are really, really highly compressed files. So, once we've decided on a data source and we have our data downloaded, we need to get it into our PostGIS database, and this is done with a tool
That tool is osm2pgsql. To the best of my knowledge it's only offered as a command-line tool, so you'll just have to run it from the command line and learn the switches. The details behind this slide make up the entire content of the next session in this series. Loading data from PBF into PostGIS isn't that difficult, but there are a bunch of steps you just have to get right, and it's a slow enough process that when you get something wrong it takes a while before you see the error and can restart. If you are using osm2pgsql, it is in your best interest to use the latest and greatest version. The Ubuntu packaged versions are unfortunately quite a bit out of date, and this project has been quite active in the last year: 1.0 was released only a few months ago and we're already up to 1.2. The latest version fixed a memory bug that required a whole lot of RAM to load a tiny bit of data, so you definitely want the latest version. In a nutshell, the load process starts with your PBF file — represented here as the planet dump — and osm2pgsql decompresses it, parses it out, and loads all the data into your PostGIS tables as necessary. So osm2pgsql really is the gateway between your source OSM data and your target PostGIS server. As I mentioned earlier, this is a long-running process. This chart is from a blog post; I'm not going to go into detail about the individual rigs beyond rig D, the orange line, which was a DigitalOcean droplet — one of their new memory-optimized droplets with 16 cores and 128 gigabytes of RAM. Even with that, the North America load, which is almost nine gigabytes, takes close to an hour and a half — that's the orange line in the middle. The far-right orange line is loading Europe; you can see that even that pretty powerful server takes almost five hours to load Europe in full.
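As a rough sketch of what that load step looks like in practice — this is my own illustration, not the exact command from the talk, and the database name and cache size are placeholders:

```python
# Sketch: assemble an osm2pgsql command line for a fresh import.
# --create builds new tables (vs. --append for updates), --database names
# the target PostGIS database, --cache sets the import cache in MB.
def build_load_cmd(pbf_path, dbname, cache_mb=2048):
    """Return the osm2pgsql argument list for a fresh import of pbf_path."""
    return [
        "osm2pgsql",
        "--create",                # fresh tables, not an incremental update
        "--database", dbname,
        "--cache", str(cache_mb),
        pbf_path,
    ]

# To actually run it (expect hours for continent-sized extracts):
# import subprocess
# subprocess.run(build_load_cmd("north-america-latest.osm.pbf", "osm"), check=True)
```

Check the flags against your installed version's documentation — options have shifted across the recent 1.x releases.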
That's just getting the data in — at that point it's really still kind of raw, not fully ready to go yet. So it is a slow-running process and it is a lot of data; be aware of that.

So now, why use these technologies and this data? I kind of feel like the whole talk so far has been covering the why, but that's my nerdy technical side showing, so here are a few more reasons. The most important — and this is true with pretty much all open source projects — is that you can make it better, and that is a really big deal. A while ago I found a bug in how Postgres handled XML data. There was a long-known workaround for the side effect of this bug, and because there was a workaround it really wasn't considered critical to fix. I got tired of the workaround one day and decided to dig in to see what I could do, and it turns out I had found a bug. I reported it, and six months later it was fixed in production. If you have the latest and greatest version of Postgres, you never have to do that workaround again. I had a very tiny hand in helping fix it, but it's just really cool to be able to submit a bug and have it fixed in production six months later.

On the OpenStreetMap side of things, if you don't like the data, you can make it better. This is something I do on a regular basis. I made a goal this year to contribute more to OpenStreetMap, and other than the beautiful months of summer I did a pretty good job — it turns out I like to go play in summer, and I don't do very much mapping then. But you can make contributions based on your local knowledge to make your data sets better. A good visual representation of this is an animation I developed after hosting a mapping party here in Golden a few years ago. The animation shows the OpenStreetMap data over time and how it filled in: you can go from a really blank slate and, in a relatively short amount of time with less than a dozen people, fill in the bulk of the core information around your local community. So that's a visual representation of how you can contribute and make your data sets more complete.

Another good reason: the price is right. There is no entry fee to get into this club — you can just start playing today. You don't have to shell out big bucks, sign contracts, and do a whole lot of stuff to get to use it; it's just there and available. This is a big deal, especially for small organizations, small businesses, local governments, et cetera. And because the price is so low, there are only a handful of strings attached. The only real one I can think of is that if you produce results with OpenStreetMap, give credit where credit is due — credit the data set that made it possible to produce the final results. This isn't hard to do: you can put it on your maps fairly unobtrusively, and most people expect to see a copyright somewhere anyway. Another example of how easy it is to use is the PostgreSQL license. The PostgreSQL license is simple enough that I can fit it on one screen during the slideshow. The simplicity of a license might not seem like such a big deal, but it means you don't need a team of lawyers doing audits every now and then to check whether you're in compliance, because I can pretty much guarantee you are. It's easy to use, there's little friction — these are all good reasons to use these systems.

Another reason, on a more personal note for me: you can use it for anything you want, at any time, and it's there and it's ready. This blog post outlines a fire that happened very close to our house not too long ago here in Golden, on South Table Mountain. The aftermath looked like this, about three days after the fire. One day my wife and I were both driving home at the same time, and we both saw the same plume of smoke coming over the top of the mountain we live right next to, and we realized we'd never talked about evacuation plans. This is also the moment we realized we have two cats but only one cat carrier. So all this is going on, we're having these conversations, we're watching the local news helicopters fly overhead — and on our TV we have the local news playing — and I start making notes of what I'm seeing from the helicopter feed. Before the night was over I had a map of the areas that had most likely burned, and the end result was a map of what actually did burn; I went up and surveyed the whole area afterwards. Having the OpenStreetMap data meant I had all these data points representing landmarks, so I could correlate what I was seeing in the aerial video with what I know on the ground and what I can see in OpenStreetMap. If OpenStreetMap wasn't there and PostGIS didn't exist, this is the kind of project I could never have done, and I would never have known exactly how close the fire got to our house — it was just over a mile. That was a really cool project. Also, to illustrate: on the northwest of this map is the Coors Brewery, a pretty valuable piece of property as far as Golden is concerned. We also have a country club over here that the fire was getting close to. The wind had come up from the west and was pushing the fire up the ridge, getting ready to push it down into the country club; and had the wind switched to come from the north — this area is all residences and mixed government and commerce — there would have been fire in the middle of a heavily populated area. With these tools I was able to quickly see what is nearby, what's impacted, and where we are in relation to it.

Another big benefit — I love this one, it's one of my best selling points for Postgres in general — is that you can run it on anything. Whatever hardware you already have, whatever OS you're already running, you can probably run Postgres on it. If you have heavy, powerful iron servers you want to put it on, cool, it'll be fine. You want to run it on a Raspberry Pi? That's also cool; it'll run fine, you just need a little more patience because it won't be quite as fast.

So, into some examples of how spatial data looks in a relational world. This is a pretty standard, simple SQL query: the first line says we want these columns, the second line says get them from this table, and the third line is our filter, which says we only want to look at two boundaries — Golden and Denver. Fairly simple; this is a straight-up standard relational query. I wrapped it up in a little bit of Python code so I could execute it and show you the results. I can run it, and lo and behold, I get tabular results. Nothing interesting here, but it illustrates the query. Now, to take this another step and make it spatial, I've added a single column to my SQL query called way. This is the column my geometry data — my spatial data — is stored in. By adding this way column to my result set, I can use GeoPandas to pull this data in as a GeoDataFrame from PostGIS and then plot it on a map. If I run this, it runs pretty quickly, and right away I get a little square with a couple of polygons: the smaller polygon on the left is Golden, the larger one on the right is the City and County of Denver. Not a very exciting spatial query, but it is a spatial query nonetheless. Now this is where things really get interesting — this is where things get fun. Spatial joins are one of the key components of spatial SQL, just like regular joins are a key component of any relational database; the ability to join between tables is pretty critical. Spatial joins let us do those joins based on where things are in space.
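For reference, here is a sketch of what such a query-plus-GeoPandas wrapper can look like. The table and column names (planet_osm_polygon, name, way) follow osm2pgsql's default schema, and the connection setup is a placeholder — this is illustrative, not the demo's verbatim code:

```python
# Sketch: select named boundary polygons, geometry column included, then
# hand the result to GeoPandas. Schema follows osm2pgsql defaults.

def boundary_query(n_cities):
    """Build a query for n_cities named admin boundaries, with geometry."""
    placeholders = ", ".join(["%s"] * n_cities)
    return (
        "SELECT osm_id, name, way "
        "FROM planet_osm_polygon "
        f"WHERE boundary = 'administrative' AND name IN ({placeholders})"
    )

# Live use (needs geopandas and a PostGIS connection/engine):
# import geopandas as gpd
# gdf = gpd.read_postgis(boundary_query(2), engine, geom_col="way",
#                        params=("Golden", "Denver"))
# gdf.plot()
```

The geom_col="way" argument is what tells GeoPandas which column holds the geometry, turning an ordinary result set into a plottable GeoDataFrame.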
In this case I'm now pulling from two tables. I still have my boundary polygon with my city, aliased as city, and we're joining to a layer called natural point, which holds natural elements in the world around us; I'm filtering it down to just the trees. More critical is the ST_Contains — this is the part that is PostGIS, and it's what allows us to do the spatial join. We're looking at n.way — that's our trees layer — for trees that are inside our city. Again I've wrapped this up in Python so I can run it, and now we get a series of points back representing all the trees mapped in the city of Golden. If I change the query — maybe I don't want Golden, maybe I want Lakewood — I can change that simple filter and get a completely different set of trees back based on where they are in space. This is the basic setup of what makes spatial queries so awesome. And just for fun, to prove that this does run very nicely on a Raspberry Pi: what I've been running my queries against is Postgres on a Raspberry Pi with PostGIS 2.5. So even for a live demo, for small data sets, a Raspberry Pi is sufficient to run spatial SQL for us.

All right, that takes us to the end of my slides. Thank you everyone for attending; please get in touch with me afterwards if you have questions. I will try to get these slides posted — I meant to have that ready before today, but when I saved the PDF the images weren't coming through, so I'll get that resolved and get the slides posted in the next few days, hopefully along with the recording of the video. Thank you for joining us, everyone. We'll go ahead and close things down. Have a wonderful day.
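For readers following along afterwards, here is a rough reconstruction of the spatial join from the demo. The table names and the tree filter are assumptions based on osm2pgsql's default schema (the talk used a "natural point" layer), not the speaker's verbatim query:

```python
# Sketch: spatial join -- tree points inside a named city boundary.
# ST_Contains(city_polygon, point) keeps only points within the polygon.
# "natural" is an osm2pgsql column name and must be double-quoted in SQL.

TREES_IN_CITY = """
SELECT n.osm_id, n.way
  FROM planet_osm_polygon city
  JOIN planet_osm_point n
    ON ST_Contains(city.way, n.way)
 WHERE city.boundary = 'administrative'
   AND city.name = %s
   AND n."natural" = 'tree'
"""

# Swapping the city parameter ('Golden' -> 'Lakewood') returns a completely
# different set of points -- same query, different area in space:
# gpd.read_postgis(TREES_IN_CITY, engine, geom_col="way", params=("Golden",))
```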