All right. Good morning, everybody. Well, hopefully it's morning for everyone, wherever they are. I am Inga Fabriz-Rotelia, from South Africa, and I'm going to be the chair for this excellent session we're going to have today on spatial analysis in R. I'm still thinking about the plenary yesterday. We have a really good lineup and I hope the session is going to be excellent. So this is the spatial analysis session. The sponsor of today, just pointing it out, is RStudio. For the participants listening in: while the presenters are giving their talks, you're welcome to place questions in the Q&A, and if you upvote questions, the ones that most people want answered will filter up to the top. Then we'll have some time after each speaker to answer questions. Our first speaker today is Lucas van der Meer. He's from the University of Salzburg in Austria. Lucas, you can share your screen in the meantime while I introduce you. He has a geoinformatics background, and he's going to talk to us today about sfnetworks, which are tidy geospatial networks in R. He also has some co-authors, whom I'm sure he will present to you. Thank you, Lucas, over to you. Sorry, don't worry. That was not so smart. What I was saying is: yes, I do have a lot of co-authors, as you see here. It happens that two of them are also in this room, because they will also talk later in this session: Robin Lovelace and Andrea Gilardi. And the third co-author is Lorena Abad, who is not here. That is the team that created this package called sfnetworks, which is about tidy geospatial networks in R. To give an idea of what the package does, I'm going to go step by step through the title of the talk, so that we can really see what it means: tidy geospatial networks in R.
The first part I'm going to highlight is geospatial networks. What are they? Geospatial networks are networks that have nodes and edges, like all networks, but in geospatial networks these nodes and edges also have a location in geographical space. A node is somewhere in geographical space, and an edge going between two nodes is also somewhere in geographical space. These are, for example, road networks and river networks, but you can also take it to a more abstract level with geolocated social networks, or, a very hot topic now, epidemiological networks. So there are a lot of types of geospatial networks that are actually used in real life. Normally we model the nodes as points and the edges as lines that go between these points. In contrast to regular networks, the topology of the network alone, so basically "this node connects to this other node and to this other node", is not enough to describe the whole geospatial network, because besides this you also have the spatial information, which is an explicit part of your network. And when you analyze geospatial networks, it is in most cases very important to take that spatial information explicitly into account. That is why specific tools are needed to analyze geospatial networks, compared to regular non-geospatial networks. The second part I'm going to talk about is "tidy" in R. I think most of you probably know what the tidyverse is: a collection of R packages made for data science that all share the same philosophy and the same structures. This is the tidy data principle, in which each row is an observation and each column is a variable. As I show in a very small example table there, each row is an observation.
In this case it's a street, and it has a variable "name" and a variable "type" to say what type of street it is. When we move this on to "geospatial" in R, we get, for our kind of data, to the sf package. The sf package is now a very well-known package used for spatial data science, and it interacts very well with the tidyverse. What the sf package does is basically add a geometry to each observation. So we have this geometry column, and each observation has its own geometry, its own location in space. The package also contains a lot of geometric functions with which you can work with this data, and possibilities to transform to other coordinate reference systems. All these kinds of functions are bundled in the sf package, made for spatial data science with spatial vector data: points, lines, polygons, and those kinds of features. And as I said, it interacts very well with the tidyverse. As an example, we have a set of lines and we can transform them into a different coordinate reference system. We can also create our own geometry on top of that, which we can then use in a geometric operation where, for example, we want to keep only those lines that intersect with this additional polygon that we made, and we end up with something like this. So this is something that sf can do. Then, networks in R. Here I want to introduce the package tidygraph, which is a tidy interface to the large igraph library. igraph is a very large library with a lot of different algorithm implementations meant for graph theory, so networks, but not focused on spatial networks. And what tidygraph says is: okay, igraph has a lot of tools, but it doesn't fit really well into the tidyverse way of working, so they created this package to bridge everything.
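The sf workflow just described (reproject, build a polygon on top, keep intersecting features) can be sketched roughly like this, using the North Carolina dataset that ships with sf in place of the lines from the slides:

```r
# A minimal sketch of the sf operations described above, using sf's
# bundled North Carolina dataset instead of the slides' street lines.
library(sf)

nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)

# Transform to a different coordinate reference system (NC state plane)
nc_proj <- st_transform(nc, 32119)

# Create our own geometry on top: here, the bounding box of a few features
poly <- st_as_sfc(st_bbox(nc_proj[1:3, ]))

# Keep only the features that intersect this polygon
kept <- st_filter(nc_proj, poly, .predicate = st_intersects)
```

The same verbs (`st_transform`, `st_filter`) reappear later in the talk, applied directly to network objects.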
This way we basically bridge between igraph, which has all the algorithms, and the tidyverse way of working. The network in itself cannot really be modeled as tidy data, but what tidygraph says is: at least we can see two different components of the network, the nodes and the edges, and those we can model themselves as tidy data frames. Together they form one object that represents the network. That's what you see here: we have one network, with one table that represents the nodes, in this case with the node names, and one table that represents the edges, which also tells us from which node to which node each edge goes. And of course, tidygraph in that way works very well with the rest of the tidyverse set of packages. For example, you can use all the tidyverse verbs that you might know, like mutate and all the other verbs, directly on your network. You just first have to specify: do I want to apply this action to the nodes or to the edges? That is how you can use these normal tidyverse verbs on your network. On top of that, you can of course use all the igraph algorithms that are specifically meant for graph theory. A small example is calculating centrality. That's what you do here: you calculate the degree centrality of the nodes and the edge betweenness centrality of the edges. You can do that with your tidyverse verbs, using the pipe structure that you probably also know. Of course, that means you can also just add a geometry column to your network where you say: my nodes actually have a location in space, which is what I do here. The problem is that tidygraph doesn't know what to do with this. It just sees it as another attribute of the nodes, but it does not know how to actually deal with the spatial information.
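The tidygraph pattern described here — two tidy tables, activate(), tidyverse verbs, igraph algorithms — can be sketched like this (the node names and edges are made up for illustration):

```r
library(tidygraph)
library(dplyr)

# Two tidy data frames: one for nodes, one for edges (from/to are row indices)
nodes <- data.frame(name = c("a", "b", "c", "d"))
edges <- data.frame(from = c(1, 1, 2, 3), to = c(2, 3, 4, 4))

g <- tbl_graph(nodes = nodes, edges = edges, directed = FALSE)

# Activate a component, then apply normal tidyverse verbs together with
# igraph algorithms wrapped by tidygraph
g <- g %>%
  activate(nodes) %>%
  mutate(degree = centrality_degree()) %>%
  activate(edges) %>%
  mutate(betweenness = centrality_edge_betweenness())
```

In this small ring-like graph every node has exactly two incident edges, so each node's degree comes out as 2.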
So for example, the nodes that I've just added here: in space, they look like this. But when I plot my network, tidygraph doesn't know that and cannot deal with it. So there is a need to explicitly incorporate space into these networks, to really have a good tool to analyze geospatial networks. And here's a tweet that said that one of the biggest reasons people still use ArcGIS is that there is not a really good general-purpose network tool in PostGIS, and the same holds for R. So our idea was to combine sf and tidygraph into one package for spatial networks, which is called sfnetworks. The idea is that tidygraph says, as I mentioned: a close approximation of tidiness for relational or network data is two tidy data frames, one describing the node data and one describing the edge data. And we said: okay, we extend this to the geospatial field, bringing sf in there: a close approximation of tidiness for geospatial networks is a collection of two sf objects, one having point geometries for the nodes and the other having linestring geometries for the edges. And that looks like this. We can start with just a set of lines and create a network out of that. At the endpoints of the lines it will create nodes, and when these endpoints are shared, there will be one node there which connects the edges to each other. So directly, we can jump from only lines to a network representation. So again: we start with only lines, and we create a network out of them that looks like this, with nodes that have point geometries and edges that have linestring geometries. Then we can, for example, extract the nodes or the edges again as an sf object. We can extract the geometries, and we can use the sf functions to transform our network into a different CRS.
This all works out of the box on this geospatial network structure, the sfnetwork class. Also, when we want to do the filter that I showed before: here we have our network, and we have this polygon, and we want to keep only the part of our network that intersects with this red polygon, and then we get this. So all these sf functions work directly out of the box on your network object, and the same is true for the tidygraph functions. Here we have our network again; we can do the same as I showed before and calculate, for example, the betweenness centrality of the nodes, and this will be added as a column to the nodes, here, down here. So tidygraph also works directly on our network object. This brings the functionality of sf and of tidygraph, which has igraph behind it, together in one class that is meant for geospatial networks. And when we map it now, the object actually knows what to do with the space; space is explicitly taken into account in these objects. So that is the core: we have one class that can be used both with sf functions and with tidygraph functions. These two worlds are basically brought together in one class. Of course, there are some extra things that are covered neither in tidygraph nor in sf, and for those we added some extra functions that are really specific for spatial networks, on top of the functionality of our two parent packages. For this I can go really fast; it's just showing some plots. For example, we have a lot of network pre-processing functions. Here we have a very dirty network: you see loops, which can easily be taken out to get a simpler network. We can also subdivide edges at interior points where edges cross each other but there is no node. We can remove pseudo-nodes: nodes that only have one edge on each side and don't really form an intersection.
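Assuming sfnetworks and its bundled roxel street dataset are available, the combined workflow shown on the slides (build a network from lines, apply sf verbs, apply tidygraph verbs) looks roughly like this:

```r
library(sfnetworks)
library(tidygraph)
library(sf)

# Build a network from a set of lines; shared endpoints become single nodes
net <- as_sfnetwork(roxel, directed = FALSE)

# sf functions work on the network out of the box, e.g. CRS transformation
net <- st_transform(net, 3035)

# tidygraph functions work on it too: add betweenness centrality to the nodes
net <- net %>%
  activate(nodes) %>%
  mutate(bc = centrality_betweenness())

# Extract the components back as ordinary sf objects
nodes_sf <- st_as_sf(net, "nodes")
edges_sf <- st_as_sf(net, "edges")
```

One class, two toolboxes: the same object is accepted by sf's generics and by tidygraph's verbs.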
We can simplify intersections, such as here. And we can also snap points to our network: for example, snap geospatial points to their nearest node, which then works like this. We can also snap points directly to the nearest edge and include them as new nodes in the geospatial network, which looks like this. All these additional functions are covered neither in sf nor in tidygraph; they are specific geospatial network functions that we provide on top. Shortest path calculation is, of course, very important. In sfnetworks you can also give points that are not already a node in the network: we can take any point, find the nearest node, and then calculate the shortest path. So this is also functionality that we added on top of what was already there. Cost matrices between all pairs of nodes are also in there, and a lot more, which I cannot cover now because I think I'm already out of time. But what I can say is that we are very proud that we already have quite good documentation, where we cover everything the package does now. You can of course also install it from CRAN now. Mainly I want to ask: look at the docs if you're interested. But also, if you have ideas or if you want to contribute, please go to GitHub, open an issue, or start a discussion topic. The package is still young. It can already do a lot of stuff, but there is also a lot it cannot do yet, and it might sometimes have bugs. So we are really happy if you want to join in and make this package better. So that was it. Thanks a lot. Thank you, wonderful. I see there is a question. We'll do just two questions, I think, and then we move on to the next presentation. Kenneth asks: how can you use sfnetworks in a hierarchical network, like a river drainage network system? This is actually not really my field, so I cannot really answer that question.
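A sketch of the extra spatial functions mentioned here, again assuming sfnetworks' roxel data: to_spatial_smooth and to_spatial_subdivision are the cleaning "morphers", and st_network_paths / st_network_cost handle routing (the node indices 1 and 100 are arbitrary picks for illustration):

```r
library(sfnetworks)
library(tidygraph)

net <- as_sfnetwork(roxel, directed = FALSE)

# Clean the network: remove pseudo-nodes, subdivide edges at crossings
smoothed   <- convert(net, to_spatial_smooth)
subdivided <- convert(net, to_spatial_subdivision)

# Shortest path between two nodes; if you pass point geometries instead
# of indices, they are snapped to their nearest network node first
path <- st_network_paths(net, from = 1, to = 100)

# Cost matrix between the first five nodes
costs <- st_network_cost(net, from = 1:5, to = 1:5)
```
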
But the real goal we have for this package is that it's a general-purpose package, not focused on one specific application. That also means the package is probably useful for a lot of applications that I don't know about. So sorry, I cannot specifically answer your question, but I do think that since it's a general-purpose package, there might be a lot of possibilities to use it for your application as well, and if it cannot do everything you need, to use it as a base for a new package that specializes in your specific application but uses this data structure as its foundation. That is basically the goal we have with the package: to create something general purpose that can be used as a base for more specialized packages that go into one specific application. Thank you. There's one more question. I just want to ask Robin to start sharing in the meantime while you answer. Martijn asks: how was the development of the sfnetworks package initiated? What triggered the team to start working on it? Well, actually, it started during my master's studies. I had an R course with a homework assignment where we had to create an R package. That was the first time I made something very small, and then I came in contact with Robin Lovelace, who liked the package and said we should push on, we should really make something bigger of this. Along the way other people joined in, and so it started as a small homework project and slowly grew into something that is now actually already used by, I think, quite a lot of people, which I'm very happy about. Yeah, personally I'm going to make use of it; it looks really excellent. Thank you, Lucas. There's some time right at the end of the session if people want to ask Lucas more questions, so please put them in the Q&A and we'll hold them for then. But just to keep to time, we're going to move on to Robin and Rosa's presentation.
Just to check, can you hear me okay? I can hear you, Robin. Fantastic, because my video is not currently working; I've just been doing an online tutorial, and if I stop that I think the whole session will end. But you can see the screen, so that's the main thing. That's the main thing, I can hear you. Is Rosa online as well, just to check? Yes, we can hear you, Rosa. I cannot share my video with you. Because you're not a co-host. Yeah, I'm happy to do the slides. Rosa, I'll try to make you co-host in the meantime, so that when you start talking you can turn your camera on. I'll work on that while Robin is speaking. So, it's my pleasure to introduce Robin Lovelace and Rosa Felix. They are going to talk to us about slopes, a package for calculating slopes. Robin is an associate professor of transport data science at the University of Leeds. I think, if you're in spatial, you know who Robin is. So we're really looking forward to your presentation. Thanks, Robin. Thank you so much for the intro, and yes, we'll be talking about slopes. It's a nice short package name, and it kind of does what it says on the tin: it calculates slopes. That's a surprisingly hard thing to do. You would assume this data on slopes would be out there and take it for granted; when you're walking up a hill, you can say "this is a steep road." But actually there doesn't seem to be that much quantitative, accurate data on that. So that's what we mean by slopes: how steep are roads, primarily, which is what we've been looking at. But we also think and hope it can be used in other areas of research, such as rivers, and I'd be interested to hear of other potential use cases. We think the methods are generalizable, and we're just going to talk about what we've done. So, you've introduced me; I'm Robin, and this is a collaborative project with Rosa, which is why it's a dual presentation. Rosa, do you want to just introduce yourself? Sure.
So I'm Rosa. I'm based in Lisbon, Portugal. I work at the University of Lisbon on active transportation, and my background is civil engineering and, somehow, geography. Great. So, we've got quite a detailed presentation on the package; there are a lot of things to consider in relation to slopes. So: why slopes in the first place, some of the key functions so you can actually start using it, and future plans. A nice, simple structure to our presentation. The first thing to say is that this map highlights our motivation for creating this package. The map was actually created by Rosa, and it provides estimates of the road gradients across a large road network in Sao Paulo. And this is something that people are interested in: on social media, there were plenty of people talking about this and using it to discuss issues with transport provision. So we'll come on to why people find this so interesting. It also shows that although we're presenting the methods, you can use this now: if you've got a city, or any other system, and you want to calculate the gradient, you can use this package right now to do that. Going into a bit more detail on that: we do want to solve real-world problems. This isn't purely for fun, although it is quite a fun package, and existing tools just weren't up to the job. Rosa and I have been collaborating on tools for estimating cycling potential, and this is especially important in cities like Lisbon, where Rosa is based, and also Sheffield, where I have lived in the UK. The importance of gradient from a cycling perspective in particular is highlighted by this graph on the right, which is from a 2017 paper on the Propensity to Cycle Tool. The blue line shows the average percentage of commute trips made by cycling, plotted against the average route gradient.
And you can see that gradient is almost as important as distance: you get almost a 50% decline in cycling just going from zero to 1% average gradient. Just to emphasize, this is the average gradient, not the steepest segment in a route. But in many cycling models this is ignored, and many transport models don't take route gradient into account at all. Even when they do, and this is a critique of the paper this is taken from, sometimes they only use simplistic measures of hilliness, like the average gradient. In fact, if you think about walking or cycling, it's not the average gradient that matters most; it's the steepest section. The point is: you could have a route with an average gradient of 1% that is just a constant 1% slope, which you may not even notice. Or you might have a route that is completely flat except for the last part, which could have a gradient of 10%. So that's a real-world need. Rosa has been using Esri's 3D Analyst, which gives you estimates of slopes, but it's not easy to reproduce: most people don't have an Esri license, and it's hard to scale up to national-level analysis, especially if you're using their cloud-based services, where you have to pay per use. So we wanted a free and open-source solution, and we had a look and simply couldn't find one that met our needs. That sounds like an ideal R programming challenge. We both love geospatial data, so we decided to look at this and try to provide something that could do it. In terms of applications, transport planning can make a lot of use of slope data. Obviously there are many attributes on road segments: what kind of road is it, how wide is it. But slope can inform many aspects of transport planning, especially active travel planning, but also logistics: if you're moving hundreds of heavy goods lorries around, you need to think about slope in terms of efficiency and route planning.
The same goes for river and flooding research, and for civil engineering, when you're thinking about the probability of a building shifting or moving. Although we haven't used the package for these other applications, they do exist, and we suspect there are other areas where this could be useful. There's also a more prosaic and fun use of slopes, illustrated by the graphic in the center, which shows the relationship between walking speed on the y-axis and gradient. This is basically mountaineering: if you are planning a route in mountainous areas, this is the kind of thing you need to consider. It shows quite an interesting relationship with gradient: when the gradient is zero, you're close to your fastest speed, but if the gradient drops to maybe minus 1%, you're slightly faster because you're going downhill. When it gets really steep downhill, you slow down again because it's too steep to walk normally; you have to go carefully. And obviously the same applies at high positive gradients. So that's another use: if anyone's into hiking, perhaps you could use the package to help plan your route and provide a more accurate estimate of the time it's going to take, taking the gradient into account. OK, so on to the details of the package. It's not currently on CRAN; we definitely plan to publish it there, but currently you can install it from GitHub. It lives in the ITSLeeds GitHub organization, and you can install it with this command and then load it in the usual way, with library(slopes). For the purposes of this presentation, we're also going to use the tmap package for data visualization. The package has some key functions. slope_xyz: I really like this function because it works silently on elevation data already embedded in the coordinates.
It takes advantage of a feature of the sf package: you can have three-dimensional coordinates, and I'll show you what we mean by that. slope_raster calculates the slope based on a linestring, like the example here from Lisbon, and a raster dataset; again, I'll show in more detail what we mean by that. slope_3d adds the third dimension to linestring coordinates. And plot_slope plots the elevation profile associated with a linestring. We're going to provide examples of each of these. But just to show you quickly the kind of thing we're talking about: this first command takes a data object, lisbon_route_3d, breaks it up into different segments, and then calculates the slope. slope_xyz here is calculating the slope, and then we can see what the slope is. When we generated this plot, we thought of another use case: people who have limited mobility, who may find it difficult to travel. Moving rapidly on: the two main inputs are linestrings, so this is like a route network, and, on the right, a raster image where you've got pixels with average height, and we combine the two together. With slope_3d we're basically just adding the Z dimension: this is XY, and after applying this command you've got XYZ. And you can plot that. On the left you've got the before, which has no elevation data, and on the right you've got this really rich and valuable information about the gradient at different parts along the route. You can also calculate slopes when you don't have a digital elevation model; this uses the Mapbox elevation data, and we've generated an example for the case-study city of Zurich, where we probably would be if it wasn't for the pandemic. That just shows you can use this to calculate routes in other cities where you don't have a digital elevation model. So I'm going to hand over to Rosa now.
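Putting the key functions together on the package's bundled Lisbon data, a minimal sketch (function and dataset names as in the slopes documentation):

```r
library(slopes)

# Average gradient of each segment, computed from a local raster DEM
gradient <- slope_raster(lisbon_road_segments, dem = dem_lisbon_raster)

# Add the Z dimension to a route's coordinates from the DEM...
route_3d <- slope_3d(lisbon_route, dem = dem_lisbon_raster)

# ...after which slopes can be computed from the XYZ coordinates alone,
# with no DEM needed
slope_xyz(route_3d)

# Elevation profile of the route
plot_slope(route_3d)
```
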
She's going to talk about using this on a bigger case study. So over to you, Rosa. Okay. So, yeah, I'm going to briefly describe how to use this for larger datasets. Sometimes we have limitations with the API keys, but you can also calculate slopes for a large route network. This can be very useful because it uses a local raster that you can freely download from many services, such as SRTM (NASA's Shuttle Radar Topography Mission) or Copernicus for Europe, or other local agencies that sometimes have more detailed rasters available. For this example on the Isle of Wight, we used the osmextract package — you can see Andrea's talk later — and tmap to make a thematic map. So we just downloaded the network from OpenStreetMap, made use of a raster dataset from SRTM, and then plotted them together just to see if they overlap and everything is okay. As you can see from the image, you might expect to see some steepness in the south of the island, and in the central area too. Yeah, there it is. Then we proceed to calculate slopes for each segment. This is a fast procedure: for this road network, which has about 23,000 segments, it took about six to seven seconds, so about 3,400 segments per second, which is kind of fast. And just out of curiosity, as you can see from the summary of the calculated slopes, the mean gradient is about four percent, and half of the segments are below about three percent. Which is, if you think about cycling, three to four percent is already a little bit steep. And if you look at the maximum, that's more than 100 percent: it's 152 percent. That's very, very steep, and of course it's probably some stairs or cliffs. You can then look at it in more detail.
And maybe Robin can also talk about the issue we sometimes face when we don't have a proper base map for elevation information, or when segments are very small: sometimes they can produce these odd, very steep segments. Okay, so here is just a quick thematic map with a classification of steepness, in this case for cycling. We consider up to three percent flat, and three to five percent mild, and then we go up to the extreme and kind of impossible road segments. Robin is just pointing to some examples. Okay, so there you go; this is an interactive base map. And if you look at the vignettes of the slopes package, there are step-by-step guides to produce these thematic maps for any given city, as long as you have OpenStreetMap data and it is covered by a digital elevation model. And if you don't have any digital elevation model on your machine, you can play with the existing package data. There's lisbon_road_segments, a small portion of central Lisbon, which has very different kinds of streets: a flat downtown and very steep hills. So that is one small road-segments example. You also have dem_lisbon_raster, which is the digital elevation information for this area. This comes with the package, and you can play and try it yourself using these functions. So, our future plans; we are almost at the end. One of the functions we want to develop is getting the digital elevation model directly from the slopes package. Another option we would like to have is not using only the Mapbox service, but also other services such as CycleStreets or even Google terrain, or other available services, using APIs of course. We also want to improve the plot_slope function for visualization of the segments, maybe in 3D, or improve the color palette and other aspects of the visualization.
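The steepness classification for cycling could be reproduced along these lines; the bin edges and labels below are illustrative, not the exact ones used in the talk's map:

```r
library(slopes)

segments <- lisbon_road_segments
segments$slope <- slope_raster(segments, dem = dem_lisbon_raster)

# Illustrative cycling-oriented bins (gradient as a proportion: 0.03 = 3%)
bins   <- c(0, 0.03, 0.05, 0.08, 0.10, 0.20, Inf)
labels <- c("0-3%: flat", "3-5%: mild", "5-8%: medium",
            "8-10%: hard", "10-20%: extreme", ">20%: impossible")
segments$steepness <- cut(segments$slope, breaks = bins,
                          labels = labels, include.lowest = TRUE)
table(segments$steepness)
```

The resulting factor column is exactly what a tmap layer would color by to get the kind of interactive steepness map shown in the talk.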
Then, more on the research side, we want to check the accuracy of the results against ground truth, and make some benchmarks against other available software that also produces slope results. This has already been submitted for peer review at rOpenSci; thank you very much to the reviewers. Then we'll try to publish in the Journal of Open Source Software and, of course, on CRAN. So if you want to get involved and contribute to these future plans, or other ideas that you have, you can go to our website. Robin, do you want to add something? No, yeah, that's basically the plan. The only other thing I was thinking of saying is the links between this package and other packages in this spatial session. Lucas was talking about the sfnetworks package; perhaps data from the slopes package could be used in the weighting profile. And all of the data that we've used is downloaded using osmextract. So it fits into this ecosystem of spatial packages. That's the only thing I wanted to say, but I think that's everything, unless anyone has any questions on the package. Thank you, Robin. Thank you, Rosa. We're a little bit tight on time, so I'm going to let the other presenters present, and then hopefully we can come back at the end, because there are some questions for you in the Q&A, and we can open those again then. Just in the interest of time. But thank you, that was really excellent. Our next speaker is Timothée Giraud. He's from the French National Center for Scientific Research. I'm sure I said it completely wrong and I apologize. No, no, it's okay. You're welcome to share your screen. The panelists have shared some links in the chat to their packages as well as their slides; just a message for the attendees. Okay, Timothée, off you go. Okay. Hello, everyone.
So today I will present this new package, mapsf, which is a package for thematic mapping: how do you put statistical data on a map, also called statistical mapping or thematic maps. The idea of mapsf is to obtain maps that are publication-ready, with all the map features that are compulsory for scientific maps: the legends, the map layout, et cetera. This is a first example of a map that has been fully done with mapsf. I'll start with a bit of technical information. mapsf is built on a small number of well-known dependencies. It depends heavily on sf: all the computation within the package is done with sf. Some functions also use classInt to break continuous data into classes, and a few functions use Rcpp. This was intentional: I tried to keep the number of dependencies pretty low. The mapsf package is based mainly on one function, mf_map, and with this function you can plot all sorts of statistical maps. It has three main arguments. The first is x, and it takes an sf object with points, lines, or polygons. The second argument is var, which is used to indicate the name of a variable in your sf data frame. And the third argument is type, which indicates what kind of map you want to draw. There are nine map types that you can create with mapsf and its function mf_map. You can create a base map, meaning without statistical data; it's a wrapper around plotting the sf geometry. And you can plot proportional symbols, typology maps to plot categories, and choropleth maps. You can also use graduated symbols. For all these map types, you can use points, polygons, or lines. And there are also four other types to plot symbols or to combine two map representations: for example, prop_choro, with which you can plot proportional symbols that are colored according to a second variable, like in a choropleth map. Apologies, Timothée.
Is it just a request that you try to make your presentation full screen? Yes. If that's possible? Yes, of course. Thank you. That's wonderful. Yes, that's much better. Thank you. Okay. Sorry. So I tried to keep the documentation of the package quite tidy. All the arguments are well documented, I hope, and since there is only one main function, it's a compact yet complete documentation. Besides this main function, mf_map(), there are also a few functions that help to add map layout elements, like the title: you can place it on the right or on the left, choose the color, and so on. You can add north arrows, credits, a scale bar, and you can add shadows, labels, and annotations. Okay. Now I have created a small example that uses some features of mapsf. In the first line here, I use the function mf_get_mtq(), which is used just to load the sample dataset of Martinique municipalities. Then I want to create a choropleth map. So I indicate the sf object for the x argument; the variable I need to plot, which is MED, the median income; and the map type, which is choro, for choropleth. So it's pretty easy and quick to plot a simple map, just to see the spatial organization very quickly. Then I can add some arguments to select the color palette, the method used to split the continuous variable into classes, the number of breaks, and some arguments for the legend: the title, the number of digits, and its position. The next step is to add some layout elements: the title, the credits, the scale bar, and a north arrow. There is also another function that can be used to select a map theme; it's this line here. I use an sf dataset, the mtq object, to center the map on this element, and I select a theme by its name, which will modify the foreground color and the background color, the position of the title, whether it is in a banner or not, and some default colors for the legends. Then in the next operation here, I add a shadow on the map.
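The workflow just described can be sketched roughly as follows, using the bundled Martinique sample data and its MED variable (median income); the palette name and legend labels here are illustrative choices, not taken from the slides:

```r
# Sketch of the choropleth example described above (assumes the mapsf
# package and its bundled sample data; some argument values are guesses).
library(mapsf)

mtq <- mf_get_mtq()                         # load Martinique municipalities

# choropleth map: x = sf object, var = variable, type = map type
mf_map(x = mtq, var = "MED", type = "choro",
       pal = "Dark Mint",                   # color palette
       breaks = "quantile", nbreaks = 6,    # classification method + breaks
       leg_title = "Median income (euros)", # legend arguments
       leg_val_rnd = -2)

# layout elements: title, credits, scale bar, north arrow
mf_title("Median income in Martinique")
mf_credits("Sources: INSEE & IGN")
mf_scale(size = 5)
mf_arrow()
```

The same `mf_map()` call with `type = "prop"` or `type = "typo"` would produce the other map types mentioned in the talk.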
So you see a little dark shadow, which is aesthetic. And then here, I have just added an inset of a world map with the function mf_inset_on(), which can be switched on and off. I indicate the keyword "worldmap" and its position, and then with the function mf_worldmap(), it automatically creates a world map with a symbol on the position of the sf object. Then I close the inset with mf_inset_off(). I will go back to this function later. On this map, I see that the island is in the center of the figure, and I want to put it a bit to the left to make it more balanced visually. So in the mf_init() function that initiates the map, I can use the argument expandBB to add a little margin on the right. So with this block of code here, I can create this map. We can use pipes with mapsf, because the mf_map() function invisibly returns its x argument. So if I use mtq as input here, it will go through mf_map() and then into the next mf_map() call for the proportional symbols. It works with the native pipe as well as with the magrittr pipe. So, I already talked about the inset, but of course you can put something else in it; it's not only for the world map. You can select an element of the map, and here I take only the first municipality, and I say I want it to have a cex of 0.3, which means one third of the width of the figure. As for the themes, I have added a dozen or so. What changes with a theme is the background, the foreground, and the position of the title: it can be inside the map, like in the dark themes, or outside, like in the "ink" or "green" themes. And you can also fully customize a theme and create one from scratch. The package comes with a website created with pkgdown, and on this website you can access the main vignette, which explains, like I do today, the principles of the package and the main functions.
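The piping behavior mentioned above can be shown with a tiny sketch; it relies only on the fact, stated in the talk, that mf_map() invisibly returns its x argument:

```r
# Pipe sketch: because mf_map() invisibly returns x, layers can be
# chained with the native pipe (or the magrittr pipe %>%).
library(mapsf)

mtq <- mf_get_mtq()

mtq |>
  mf_map() |>                          # base map
  mf_map(var = "POP", type = "prop")   # proportional symbols on top
```

The same chain works with `%>%` if magrittr is loaded.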
It also has a large set of code examples to display the main map types. For example, this is the code for the base map, but you also have examples for proportional symbols, choropleth maps, typology maps, and the combinations between two variables. So here is an example with prop_choro: we have proportional symbols with a choropleth coloring within them, and you have to select two variables; for some arguments you pass a vector of two, here for the legend positions, for example. We also have a vignette that details how to create map insets. One thing I wanted to show you is that when you start an inset here, you can in fact plot whatever you want in it, as long as it is base graphics, not ggplot. So here, I created an inset in the top right corner, and I just drew a histogram of the values. So if you want to create a full map within the inset, it's possible. There is also a vignette on themes, where you have the parameters of the current themes, explanations on how to modify an existing theme, and how to create one from the beginning. The last vignette is dedicated to explaining how to export maps, because it can be difficult to export figures with R: you have to decide the width and the height of the figure. Here, with the mf_export() function, which can export to PNG or SVG, you just select one side of the figure and the other is automatically deduced, taking into account the aspect ratio of an sf object. So here I just selected my sf object mtq and the width, and the height is automatically computed. So thank you. You will have the link to the website, the link to this presentation, the GitHub repo for the package, my Twitter handle, and the link to my blog. I'm happy to take questions. Thank you, Timothée. I think we've got time for one, maybe two questions.
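The export step might look like the following sketch; the exact argument names of mf_export() may differ between mapsf versions, so treat this as an assumption to check against the vignette:

```r
# Export sketch: give one dimension and the other is deduced from the
# aspect ratio of the sf object, as described in the talk.
library(mapsf)

mtq <- mf_get_mtq()

mf_export(x = mtq, filename = "mtq.png", width = 600)  # height deduced
mf_map(mtq)                                            # draw into the file
dev.off()                                              # close the device
```

Using an `.svg` filename instead of `.png` would select the SVG device.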
So, Shimela asks: it's possible to create a map like this in ArcGIS, so what is the benefit of using R for this instead? Okay, that's a broad question. The point is to stay in R, basically. I create my analysis with R, downloading data, tidying, analysis, and so on, and then I go as far as I can with R to produce a publication-ready map, to avoid having to use some other tools like Inkscape or so. And also, of course, for reproducibility: maps are figures, like a graph or a table; they are part of an analysis, so they have to be reproducible as well. Exactly. And while Andrea starts sharing his screen, there are two questions I think you can answer quickly. The first: is it possible to export in TIFF format? And then: are you planning to incorporate ggplot2 insets automatically in the future? Okay, so it's not possible to export to TIFF format for now, but it would not be too difficult to add this format to the mf_export() function. For now it's only PNG and SVG. And then, sorry if I missed it. Are you planning to include ggplot in the future? No, I don't. It's not compatible. Okay, thank you. All right, thank you, Timothée. Excellent. If anyone has more questions, you're welcome to still put them in the Q&A and he can answer them at the end. On the Slack channel also. Oh, exactly. Yes. All right, so our next speaker is Andrea Gilardi. He's from the University of Milan, where he is a research fellow, and he's going to share his presentation. It's a pre-recorded presentation, so he'll be playing the recording on his computer. I am Andrea Gilardi, and I'm here to present a talk entitled "osmextract: an R package to download, convert and import large OpenStreetMap datasets". This first slide summarizes the process that led to the development of our package.
The maintainers of this package are me and Robin Lovelace. I'm based in Milan; he's based in Leeds. We started talking about this project more or less two and a half years ago, chatting in a GitHub issue, where we discussed road networks, road network analysis, and OpenStreetMap data. We then spoke in person about this project, and several other things, at useR! in Toulouse in 2019, so I think that more or less in these days we can celebrate the second birthday of this project. Anyway, as you may guess from the title of this presentation, I want to introduce another package that focuses on OpenStreetMap data. So first, I want to briefly present OpenStreetMap. OpenStreetMap is an online database that provides open-access geographic data, with rich attributes, worldwide. OpenStreetMap is the so-called Wikipedia of maps, and I think it is one of the most important providers of raster and vector geographic data. The data stored in OpenStreetMap represent a wide range of physical and human features, including roads, rivers, buildings, coastlines, and political and administrative boundaries. The data in OpenStreetMap are used by several public and private agencies, and I'm pretty sure they are used by hundreds of academic researchers in several fields. For example, at the end of this presentation, I will mention a few projects that I developed with Robin using OpenStreetMap data. So, now we can start talking about the package. The package can be installed from CRAN using the usual command, or the development version can be installed from GitHub using the install_github() function in remotes. The package repository is hosted in the rOpenSci organization. At this moment, I just want to briefly mention the fact that most of the functions in our package work better if the data is stored in a so-called persistent directory.
This is important since, if you store the data in a persistent directory, then the functions in our package do not need to re-download the data every time you need to analyze it. So, if you want to set the persistent directory, you should modify your .Renviron file and add the string as defined in this slide. Then you can install and load the package. When you load the package, it raises an important message regarding the license associated with the OpenStreetMap data. This is quite important, especially if you're working in a for-profit capacity, since there are some, let's say, legal aspects related to this license. I don't want to speak about this license at the moment, but we included a message that should add several details about it, and some links for more information. So, the data in OpenStreetMap are really rich, and I think OpenStreetMap data can be obtained in two main ways. The first one is to use the so-called Overpass API to run a query against a server and download some data. This is the approach adopted by the R package osmdata. The other approach is using the so-called pre-formatted OpenStreetMap extracts, stored by external providers. The most famous provider is probably Geofabrik, and the idea behind these providers is that they divide the world into several zones, and for each zone they store the OpenStreetMap data for that zone. For example, the zones may correspond to different countries, or to the regions within a country. For the moment, we support three providers, which are called Geofabrik, bbbike, and OpenStreetMap.fr, but I don't want to add more details here about those providers. If you want more details about them, there is a vignette in the package that, let's say, adds more details.
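The persistent-directory setup described above can be sketched as follows; the environment variable name is taken from the osmextract documentation as I recall it, so treat it as an assumption to verify, and the path is of course a placeholder:

```r
# Persistent directory sketch (variable name assumed from the osmextract
# docs; adjust the path to your machine).
# In your ~/.Renviron, add a line like:
#   OSMEXT_DOWNLOAD_DIRECTORY=/home/user/osm-data
# or set it for the current session only:
Sys.setenv(OSMEXT_DOWNLOAD_DIRECTORY = "/home/user/osm-data")

library(osmextract)
oe_download_directory()  # should now report the persistent directory
```

With this set, repeated calls on the same extract skip the download and conversion steps, as shown later in the talk.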
Okay, now we can start introducing more details about this package. The most important functions of this package are summarized in this slide. The first four functions are used to manipulate the data, and the fifth function is just a wrapper around all the other functions. To start, the first function, which is called oe_match(), is used to match an input place with one of the OSM extracts stored by the providers. Then, after matching the OSM extract, you want to download it, and you can download that file using oe_download(). After downloading the file, you can convert it between two different formats using oe_vector_translate(). This is quite important, since the data stored by the external providers use a, let's say, size-efficient format, which is the protocolbuffer binary format, .pbf. The problem is that this format is quite inefficient for reading and input-output operations. So we decided to implement a function that converts this size-efficient format to another format that is more efficient for reading the data, which is the GeoPackage format. Then, the oe_read() function is used just to import the data. And finally, the oe_get() function is a wrapper around all the other functions that runs all the steps in sequence and imports the data. I will present some examples using oe_get(). This is the first example: we run the function oe_get(), specifying that we want to import the OSM data associated with the Isle of Wight. If we check the output, we can see that first the function says that the input was matched with the OSM extract associated with the Isle of Wight. Then it downloads the data, then it converts the data from the .pbf format to the GeoPackage format, and finally it reads the data.
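The match / download / translate / read pipeline can be sketched like this; the one-liner matches the Isle of Wight example in the talk, while the step-by-step version simplifies the intermediate return values, so check the package docs for the exact signatures:

```r
# Pipeline sketch (intermediate signatures simplified; see the docs).
library(osmextract)

# One call, as in the example from the slides:
lines_iow <- oe_get("Isle of Wight")

# Roughly equivalent, step by step:
match     <- oe_match("Isle of Wight")        # match place to an extract
pbf_file  <- oe_download(match$url)           # download the .osm.pbf file
gpkg_file <- oe_vector_translate(pbf_file)    # convert .pbf -> GeoPackage
lines_iow <- oe_read(gpkg_file)               # import as an sf object
```

The wrapper also skips any step whose output already exists in the persistent directory, which is what makes repeated calls cheap.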
I didn't mention it before, but the data returned by the functions in our package already use the simple features format, as you can read at the bottom of this, let's say, verbose output. There are several ways to perform this matching operation. The first one is to use a character string specifying the place. You can also perform a so-called spatial matching operation, where you give a pair of coordinates, and these coordinates are paired with an OSM extract using a spatial operation. For example, this operation is the same as before, since these coordinates represent the same area, the Isle of Wight. Again, the function says that the input was matched with the Isle of Wight. Then the function says that it skips downloading the file, since the OSM extract associated with the Isle of Wight was already downloaded in the previous step, so we do not need to download the same file again. The vector translate steps of this operation are also skipped, and I think this underlines the fact that it's quite important, if you use our package, to set the so-called persistent directory. And lastly, you may run this matching operation using a string that can't be matched with any OSM extract. For example, here I say that I want to match using a character string that specifies the name of the small town where I live. Of course, there is no OSM extract for my specific town. This is not a problem, since when you specify a string that can't be matched with any OSM extract, the function internally calls the Nominatim API to geolocate the place, and then performs a spatial matching operation to match this place with one of the OSM extracts.
For example, I live in the north of Italy, and you can see that this small town was geolocated; here the function says "searching for the location online", and the input was matched with the Nord-Ovest OSM extract in the north of Italy. The two most important parts of using osmextract are the matching operation and the vector translate operation. The vector translate operations are quite important since, if you tune them, you can analyze small parts of large extracts without, let's say, overloading your session. For example, here is an example where, in the vector translate options, we use SQL syntax to select only the highways, that is the roads, where the highway attribute is not null. On the right, we present a representation of the output. We can also create a more detailed example. Here, I again use the SQL language to select only the highways that belong to the primary, secondary, and tertiary categories. So, let's say, this is like a query to select the most important highways that belong to a specific area. And I think this is quite important. For example, I developed several projects in road safety analysis, and if you want to focus on just the most important roads in a certain area, this is quite important for selecting those roads. For the moment, I cannot add more details on this topic, but there is an extensive vignette in the package that covers all these aspects. I just wanted to briefly mention some of the new features that we recently introduced in this package. We recently introduced two parameters called boundary and boundary_type. These parameters can be used to perform so-called spatial filters, and we will see two examples in a few slides.
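The SQL filter from the slides can be sketched as follows, using the query argument that osmextract exposes for the vector translate step; the exact query text here is a plausible reconstruction of the one shown:

```r
# SQL filter sketch: keep only the main road categories while the
# .pbf file is translated, so the full extract never enters R.
library(osmextract)

major_roads <- oe_get(
  "Isle of Wight",
  query = "
    SELECT * FROM lines
    WHERE highway IN ('primary', 'secondary', 'tertiary')
  "
)
```

Because the filter runs during translation, this keeps memory usage low even for large extracts.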
We introduced the oe_match_pattern() function, again to improve the matching operation, we improved the oe_get_keys() function, and we created the logo. This is the logo. So, the boundary argument can be used to perform this so-called spatial filter. The boundary argument should be specified using an sf object or a simple feature geometry column (sfc) object using a lat/long CRS. In this case, if you specify the boundary argument, then internally the vector translate options are modified, saying that during the vector translate, GDAL should select only the features that intersect the boundary. Here we can see an example where we highlight, in red, the highways that intersect a circular buffer, the black circle, centered on the most important city on the Isle of Wight. We can also slightly modify the spatial filter to perform a clipping operation. If we specify boundary equal to an area, and boundary_type equal to "clipsrc" (clip source), then this operation is slightly different than before: we apply the spatial filter and we also clip the features. If you compare the two maps, it's quite clear that in the second map we are also clipping the features according to this spatial filter. So, to conclude, I want to mention a few projects that we developed using this package. In this first slide, I want to mention two projects that were developed by Robin. On the left, we represent the speed limits of the roads in London, while on the right we have a classification of the cycleways in a region of Norway. This is just to underline that OSM data are literally widespread: they cover several areas of the world, and they are also rich data, since the features stored in OpenStreetMap have several attributes describing them.
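The boundary example can be sketched like this; the buffer center is a hypothetical point on the Isle of Wight chosen for illustration, not the coordinates used in the slides:

```r
# Spatial filter sketch: keep and clip only the features intersecting a
# buffer, via the boundary / boundary_type arguments described above.
library(sf)
library(osmextract)

# Hypothetical 5 km buffer around a point on the Isle of Wight (lat/long CRS):
centre <- st_sfc(st_point(c(-1.29, 50.70)), crs = 4326)
buffer <- st_buffer(centre, dist = 5000)

clipped <- oe_get(
  "Isle of Wight",
  boundary      = buffer,
  boundary_type = "clipsrc"   # clip the features, not just intersect them
)
```

Omitting `boundary_type` (or using the intersect default) keeps whole features that touch the buffer, which is the difference between the two maps in the slides.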
Two other projects that were developed by myself and Robin are summarized in this slide. On the left, we have a representation of a road safety model for the road network of Leeds. On the right, we have an estimation of the ambulance interventions that occurred on the road network of Milan. In both cases, the road network was derived from OpenStreetMap data and imported with our package. You can see that on the left we applied, let's say, a stricter filter, selecting just the most important highways, while on the right we applied a less strict filter. In both cases, these are examples where we applied both an SQL filter, for selecting just specific roads, and a spatial filter, for selecting only the roads that lie in the city. So, I have to conclude, and I want to thank you very much for attending this presentation. I also want to say a big thank you to rOpenSci and the reviewers that helped us develop this project and improve our package, and also to the other users that contributed to this project. To conclude, I just want to say that I would be really happy if you plan to use osmextract for your projects whenever you plan to use OSM data. And if you use this package, please let us know if we can improve it in any way. Again, thank you very much for your attention. Thank you, Andrea. There is a question for you. If anyone else has any other questions, you're welcome to put them in the Q&A, and then we'll go back to Rosa and Robin's previous questions. Ah, there are two for you. Okay, so someone says hello. Hi Andrea, have you used the osmdata package, and do you know how osmextract is different from it? I have used the osmdata package and it took quite a long time to download the data, so this package looks great. Maybe you can comment on that. Okay. I mean, yes, I use the osmdata package quite a lot.
And exactly, the problem that you mentioned is probably the main reason that made us develop this package. I think that the approach adopted by osmdata is ideal for, let's say, smaller OSM extracts. For example, if you want to focus on a small city, the approach adopted by osmdata is far, far superior to our approach. On the other hand, if you want to download the OSM data for a larger region, I don't know, a big city or a region or a country, then I think that our approach should, or could, perform better than the approach adopted by osmdata. We are running some comparisons in these days, and we plan to publish the comparison in a few days. Wonderful. So keep your eye out for that. And there's another question here from Liam. He says: are you considering adding functions for cleaning OSM data, for example, cleaning geometry names? I'm not sure what you mean by geometry names, but for the moment there are no functions for cleaning OSM data, since I think that the package should focus only on importing the data. Maybe we can consider adding functions to also manipulate the data while it's being imported. For example, I didn't mention it here, but there are some possibilities to also filter and select just some rows and some columns that satisfy some conditions. Excellent. So if I didn't answer the question, please just send me a message. Liam, if you want to be more specific, you're welcome to add to your Q&A again. Okay. Andrea, I'll call you again if there are some more questions for you. I'm going to go to the questions that we couldn't get to for Robin and Rosa; I see Robin is back. Robin, there's a question on your slopes package: does it calculate the direction of the slope? Meaning, the slope is downhill in one direction and uphill in the other direction. So I can try to answer that. Yeah, go ahead.
Yeah, so that's an issue that's already open in our GitHub repo. It's something that's really important to have. If you are assessing a route choice, right, it's totally different to go uphill or downhill. So, yeah, that's a feature that we want to add, maybe as an argument of the slopes functions, like slope_xyz(), but I don't know if Robin wants to add something. But yeah, that's something that we have already thought about, and if you want to contribute, please go to the GitHub repo. Wonderful. Okay. The next question is from Michael. It says: nice talk and nice slopes package, thanks. I am a hydrologist. Would it be possible to calculate slopes of catchments or hills? You mentioned rivers, but those are line features. And if so, is a DEM needed as a raster, or can online data like OSM also be used? So I think Robin can answer Michael. Hi, can you hear me now? Yeah, finally. So, that's a great question about using it in other areas. The one thing about the slopes package is that it's quite specifically focused on linear features, and hydrologists often use raster data to represent river systems. So it kind of depends on the input data. If you represent a river network as a series of lines, maybe the center lines, maybe even the boundaries of the river, then absolutely it could be used to calculate the steepness of different segments of that river network. Especially when you've got a waterfall or something, I think it could be used to detect where you've got particularly steep sections, like waterfalls. But it's not useful in the sense of assessing the slope at every single point on the map across an entire raster surface; it's more about the slope associated with specific linestrings, the central lines of rivers. So, yeah, but we'd be interested to see how it could be used in river network analysis. So, feel free to open an issue on the GitHub and we can take a look at that. I hope that answers your question.
Thank you, Robin. There is a last question, from Omar: does the package calculate the slope in degrees? So it currently calculates a percentage, right? Yeah, so currently it's a percentage, which is: for every 100 meters along the path, how many meters does it go up? So 1% means that you're climbing 1 meter for every 100 meters you're going along. And you can ask why you can go over 100%: 100% means 45 degrees, and simple trigonometry can translate between the two. But it's actually a good point; people might find it useful to have the output in degrees, so I guess we could add an argument like units = "degrees". That would be possible. So yeah, I'd see that as another good feature request, and I think we could probably do that. Excellent. Any more questions from the attendees? You're welcome to pop a question in; we have two minutes or so. I would like to ask a question of Lukas if he's still here. Yes. It's about the sfnetworks package: to what extent, how big a network can it handle before your computer gives up? It depends a bit, as always. I think Andrea might also be able to answer this more specifically, because I know that he has, I think, imported the whole city of London, which is quite a large network already, around 15,000 edges, and actually creating the network out of that goes quite fast. But of course then, if you want to do maybe intersections, things like this, that might take a long time. Yeah. Okay. I ran several tests, for example. And, I mean, if you just want to, let's say, create the network object, it works also for the Greater London data, and the Greater London data is like more than 700,000 edges. So it's a monumental, let's say, road network. The problem is that, of course, if you want to do something useful starting from that road network, it's totally unfeasible to work with a network that large.
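The percent-to-degrees relation mentioned in the answer is plain trigonometry; the helper below is hypothetical, not part of the slopes package:

```r
# Hypothetical helper illustrating the trigonometry in the answer:
# a gradient of p% corresponds to atan(p/100) radians.
slope_pct_to_degrees <- function(pct) atan(pct / 100) * 180 / pi

slope_pct_to_degrees(100)  # 45 degrees, as stated in the answer
slope_pct_to_degrees(1)    # about 0.57 degrees
```

A `units = "degrees"` argument, as suggested in the answer, would just apply this conversion to the computed gradients.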
Usually I work with, let's say, 50,000 edges, more or less. In those cases, maybe it takes a few minutes to run, let's say, all the centrality and other measures, but it should run smoothly. I mean, all the cases that are, let's say, relevant beyond simply creating an object, I think they run fine. So the problem is not just creating the sfnetwork object; the problem is that all the subsequent steps are, let's say, much more demanding than creating the sfnetwork object. But I think it should scale pretty fine. Thank you. Maybe one thing I can add to that is that our goal with the package is mainly to create an API that is easy to use and easy to understand. And I do think that if you want to work with a huge network and do many-to-many shortest path calculations on a really big network, then probably our package is not the tool that you should choose. Then you should choose maybe another R package, one with a C or C++ routing backend, that is really focused only on this task, and they can do that really, really fast. So if you have large networks and really one goal, "I only want to do this", then our package might not be your best choice, because our focus is more on creating something general purpose that's easy to understand, easy to use, and easy to work with alongside sf and the other tidyverse packages. Wonderful. Okay. I'm going to thank the speakers for the session one last time. That was really excellent, and I feel quite honored to have been able to chair the session. So, up next, there's a yoga session if you need some calm and meditation. There is also an R-Ladies community meeting, and after that there are some talks, which you can also find on Slack. And just a last thank you again: the sponsor for the session is RStudio, and I think we really appreciate that sponsorship. And I'll thank the speakers again. Thank you very much. And thank you to all the attendees today. Wonderful.
Thank you. Thank you.