Well, thanks everyone for joining. I hope you've all been having a great summit. It's a Friday afternoon, and I really appreciate you all sticking around. So today I'd like to talk about maps and making maps in the Open Visualization group. Have you ever wanted to make a map to help city planners make safer roads, or scientists understand climate change, or just make cool map animations kind of like this? Well, making these map visualizations is now incredibly easy thanks to all the work we've been doing in Open Visualization, a collaboration space in the OpenJS Foundation, which is turning one in June. But before we dive into the progress we've made over the last year, I'd like to set the stage with a bit of map-making history. Cartography, the study and practice of creating maps, has been essential to human development for thousands of years. It's helped us navigate and explore the world, and it's enhanced our knowledge and perception of it. Some of the first maps ever discovered were created by the ancient Babylonians, who etched how they saw the world around them into clay tablets like this one. They drew points of interest, boundaries, and lines by inscribing them into wet clay, which would then be baked to harden each time they wanted to make one of these maps. What started out as marks in clay can now be made incredibly easily thanks to all the technical advances we've made and are continuing to make, across the world but also in Open Visualization. So, I'm Chris Gravang, a visualization engineer at Joby Aviation. We're a company that is building an electric aerial ride-sharing service for everyone. We've spent the last 10 years designing and testing electric vertical takeoff and landing aircraft that can carry five people at speeds up to 200 miles per hour. As a visualization engineer, my role is to work side by side with our creative and engineering teams to help them explore and understand our data visually.
Here's an example. Behind me, or to the side of me, is a rendering of 10,000-plus miles of flight tests we conducted last year, and I visualized it using Open Visualization tools in the web browser. Tools like these allow us to rapidly assemble data animations and quickly iterate to create beautiful, polished geospatial art. Now, let's dive a little deeper into what Open Visualization is. Open Visualization, or OpenViz, is a collaboration space established by the OpenJS Foundation. OpenViz is a group of developers who build data visualization tools together out in the open. The projects we're developing are kepler.gl and vis.gl. kepler.gl is a powerful geospatial analytics tool for large data sets, and vis.gl is a suite of JavaScript visualization frameworks. I'll go into more detail about what these projects are later on. They were created in open source at Uber and then donated to the OpenJS Foundation to be openly governed. Often, though, when I mention open governance, I'm asked what this means. Well, basically, it's open source as you know it, but even more open. With open source projects sponsored by private companies, decisions are often made behind closed doors, and people can abruptly leave without transferring their knowledge to the people out in the open who want to continue the project. Open governance is where a foundation has ownership and all of our thinking and planning around the code is always available to the public. We make it easy for anyone to contribute, whether you'd like to remix the code or simply add your own ideas. Most recently, we held our first-ever OpenViz collaborator summit in Madrid last October, where we brought together our international geospatial community to share knowledge and collaborate. We had about 100 contributors join us and a fantastic lineup of speakers to facilitate workshops and talks. I'd like to highlight two of our talks.
Paul Taylor, an engineer at NVIDIA, showed how his work makes it a lot easier to optimize data analysis and visualization with the latest CUDA GPUs. Before, deck.gl performance was limited by the web browser, because it's a JavaScript library, and we'd have to work around this by adding complicated tiling techniques rather than loading a lot of data at once. But now you can run deck.gl using the latest native desktop APIs within Node.js, which lets you render a whole lot of data way faster. For example, in this animation, he parsed a huge 10-million-point data set for New York City to render an extremely detailed map, without any tiling, and did it in real time during the talk. We also heard from Kyle Barron, an engineer at Foursquare. Kyle showed us how to use the Apache Arrow and Apache Parquet projects in your geospatial tech stack, and he showed that it's pretty easy to do and can lead to some really impressive results. He did this with deck.gl. deck.gl offers a low-level binary interface for really data-intensive applications, but writing a custom binary implementation is pretty difficult for day-to-day applications. Now we have GeoParquet and GeoArrow in JavaScript, and it can be done with just a couple lines of code. For example, here he rendered a million building footprints in Utah all at the same time in Observable, which runs in the browser. So we had a really awesome example of how you can use Node.js and desktop rendering environments, and we're also continuing to push what can be done within the browser without any special graphics hardware or vendor-specific setup. While there, we also heard about the contributions from other companies like Carto and Microsoft, and groups like MapLibre, and all the different contributions they've made to our projects. It was really cool to see just how diverse these projects have gotten, and most importantly, to meet face-to-face with everyone.
I also wanna shout out Carto for hosting us in Madrid; it was really great that we got to use their space. Now, let's dive into the projects we actually make for developers like you. We build a variety of JavaScript libraries that work incredibly well together and easily integrate into your existing tech stack. Our flagship framework, deck.gl, is one of the top web-based visualization libraries, with over 136,000 weekly downloads, double its growth compared to last year. kepler.gl is a web-based application for exploratory geospatial visualization, built on top of deck.gl and its companion vis.gl frameworks. It allows technical and non-technical audiences to visualize all kinds of data easily with a simple interface. You can just drag some data onto your map and add layers and filters to gain really quick insights. It's a major showcase of the Open Visualization software stack, bringing together multiple visualization frameworks into one powerful application. And kepler.gl offers itself both as an application and as a UI library. The application has around 30,000 weekly users and has been integrated into all kinds of places, like Jupyter notebooks, JupyterLab, VS Code, Tableau, and even Apache Superset. Many companies in the mobility space are using kepler.gl for geospatial analysis. It's really easy to get started, it's free, and there's no signup required; you can just go to the website, kepler.gl, and start messing around. It's also been used as a geospatial user interface library by companies such as Foursquare and Carto to build their own customized applications. They can do this because we've added React component factories that can inject custom components into the UI, along with a system of callbacks to integrate the state changes of the kepler.gl component with the rest of your application.
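To make the library usage concrete, here's a minimal embedding sketch based on the kepler.gl documentation. kepler.gl keeps its state in a Redux store that the host application owns, which is what the callback system hooks into; treat the exact prop names and the sample dimensions as assumptions.

```javascript
import React from 'react';
import {createStore, combineReducers, applyMiddleware} from 'redux';
import {Provider} from 'react-redux';
import {taskMiddleware} from 'react-palm/tasks';
import KeplerGl from 'kepler.gl';
import keplerGlReducer from 'kepler.gl/reducers';

// kepler.gl stores all of its state in a Redux store you create,
// so your app can observe and react to every state change.
const store = createStore(
  combineReducers({keplerGl: keplerGlReducer}),
  applyMiddleware(taskMiddleware)
);

export function App() {
  return (
    <Provider store={store}>
      <KeplerGl
        id="map"
        mapboxApiAccessToken={process.env.MapboxAccessToken}
        width={800}
        height={600}
      />
    </Provider>
  );
}
```

From here, component factories let you replace pieces of the UI (say, a custom side panel) without forking the repository.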
All in all, these mechanisms help developers avoid forking kepler.gl, which reduces the long-term maintenance challenges you might otherwise face and encourages collaboration among the different groups using it as a library. By far the biggest investment kepler.gl received last year was its conversion to TypeScript, which in itself represented over a person-year of work. With TypeScript, developers can look up type definitions to quickly implement customizations, and they have a strong safety net when making changes to the code. The kepler.gl code base keeps growing as well, so we broke the big monolithic modules up into independent smaller ones published on npm. That means developers should now be able to install only what they want when building with kepler.gl. We also make the base map library in kepler.gl available as its own React component. react-map-gl is a user-friendly API wrapper for React, and it works with Mapbox and now, as of this year, with MapLibre. We released version 7 this year, a complete rewrite of the library addressing many long-standing issues we had in v5 and v6. The rewrite also reduced the bundle size by 74% and added support for any Mapbox-compatible plugin, like Mapbox GL Draw, which you can use for drawing polygons on your maps with just a couple lines of code. It's also paved the way for us to add compatibility with more map libraries: later this year, we're adding a Google Maps React wrapper, which should work really similarly to the existing wrappers. As I mentioned earlier, deck.gl is our flagship framework for visualizing and doing exploratory data analysis on really large data sets. When you just need a map, you might use react-map-gl on its own to get a base map. But when you start to add a lot of complexity and customization to your application, deck.gl comes in as the solution for adding all kinds of layers.
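As a sketch of what the v7 API looks like with MapLibre: in v7 you can pass the underlying map library in via the `mapLib` prop, so the same component drives either renderer. The style URL below is MapLibre's public demo style; the component name and props follow the react-map-gl docs, but treat the details as assumptions.

```javascript
import * as React from 'react';
import maplibregl from 'maplibre-gl';
import Map from 'react-map-gl';
import 'maplibre-gl/dist/maplibre-gl.css';

export function BaseMap() {
  return (
    <Map
      mapLib={maplibregl} // swap in MapLibre instead of Mapbox GL JS
      initialViewState={{longitude: -122.4, latitude: 37.8, zoom: 11}}
      style={{width: 600, height: 400}}
      mapStyle="https://demotiles.maplibre.org/style.json"
    />
  );
}
```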
It has an extensive catalog of composable layers that lets you create really complex visualizations, and it makes it easy to package and share those visualizations as reusable layers for other people, within your group or even online, who might want to use them. We've designed the API to reflect a reactive programming paradigm. It was originally written to work with React, but for years now it has worked in any environment. It's a pure JavaScript library built on WebGL, so whether you use the vanilla JS or the React interface, it can efficiently handle and render everything, with all the heavy data loading done on your GPU through WebGL. We made some major upgrades to the tools that support the development and publishing of deck.gl over the last year. All examples are now bootstrapped with Vite, which really reduces the amount of time it takes to get started with any of our examples. We're also now pre-bundling our scripts with esbuild and using Docusaurus for our website. This lets us iterate a lot faster and makes it a lot easier for first-time users to get started, whether they're contributing or just using the library. It was also a tremendous effort to convert the deck.gl code base to TypeScript. We did it not just so TypeScript users can easily consume the library, but also for ourselves and other contributors, to improve the overall robustness and maintainability of our very large code base. Over the last year, in addition to developer experience enhancements, we focused pretty heavily on what we call deck.gl extensions, like the collision filter extension and the terrain extension. You can think of extensions as bonus features you can optionally add on to core deck.gl layers. The collision filter extension allows layers to hide features that overlap with other features.
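Here's a sketch of how an extension attaches to a core layer, following the deck.gl extensions docs. The label data and view state are placeholders of my own; the idea is that adding the extension to a layer's `extensions` array activates extra props like `getCollisionPriority`.

```javascript
import {Deck} from '@deck.gl/core';
import {TextLayer} from '@deck.gl/layers';
import {CollisionFilterExtension} from '@deck.gl/extensions';

// Hypothetical label data: [{coordinates: [lng, lat], name, population}, ...]
const labels = [];

new Deck({
  initialViewState: {longitude: -74.0, latitude: 40.7, zoom: 10},
  controller: true,
  layers: [
    new TextLayer({
      id: 'city-labels',
      data: labels,
      getPosition: d => d.coordinates,
      getText: d => d.name,
      // When two labels collide, the one with higher priority stays visible.
      getCollisionPriority: d => d.population,
      extensions: [new CollisionFilterExtension()]
    })
  ]
});
```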
An example is a really dense scatter plot layer with many points that overlap. By using this extension, points that collide with each other are hidden, such that only the ones that don't collide are shown. These collisions are computed on the GPU in real time, which allows them to update smoothly on every frame; we're talking 60 frames per second. This stuff used to be done on the CPU side, and every round trip between CPU and GPU would waste frames, leading to a very staggered experience. It's very, very smooth to do this now. And it's generic: it works with all the different layers in the library, whether text or scatter plots or other kinds. The terrain extension I'm really excited about. This one renders otherwise-2D data along a 3D surface. For example, if you have GeoJSON of a city (GeoJSON being things like building footprints and street lines), you can overlay it on top of something like an elevation model. It's really useful when viewing a mixture of 2D and 3D data sources. And the repositioning of all these geometries is again performed on the GPU, so you can do it dynamically in real time and keep it very interactive. As you're designing your maps, you don't really need to think about the complexities of offsetting all these geometries between 2D and 3D space. Along the same lines, I'd like to share a really exciting new integration. On Wednesday, just two days ago, the Google Maps Platform released Photorealistic 3D Tiles, with a data set that's comparable to Google Earth. That means you can now use the deck.gl 3D tiles layer to render entire cities in amazing detail. And when you combine that with the terrain extension, you can overlay 2D layers onto 3D cityscapes. All of this can be done at runtime with very little code.
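A sketch of that combination, based on the deck.gl docs for the 3D Tiles layer and the terrain extension. The API key and GeoJSON URL are placeholders; `operation: 'terrain+draw'` is what marks the tiles as a surface that other layers can drape onto, but treat the exact values here as assumptions rather than copy-paste-ready code.

```javascript
import {Deck} from '@deck.gl/core';
import {Tile3DLayer} from '@deck.gl/geo-layers';
import {GeoJsonLayer} from '@deck.gl/layers';
import {TerrainExtension} from '@deck.gl/extensions';

const GOOGLE_MAPS_API_KEY = '<your-key>'; // placeholder

new Deck({
  initialViewState: {longitude: -122.4, latitude: 37.8, zoom: 15, pitch: 60},
  controller: true,
  layers: [
    new Tile3DLayer({
      id: 'google-3d-tiles',
      data: `https://tile.googleapis.com/v1/3dtiles/root.json?key=${GOOGLE_MAPS_API_KEY}`,
      // Render the photorealistic tiles and expose them as a terrain surface.
      operation: 'terrain+draw'
    }),
    new GeoJsonLayer({
      id: 'streets',
      data: '<url-to-city-geojson>', // placeholder
      getLineColor: [255, 255, 0],
      getLineWidth: 4,
      // Drape the otherwise-flat 2D GeoJSON onto the 3D cityscape.
      extensions: [new TerrainExtension()]
    })
  ]
});
```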
I think this development opens up a huge wealth of exploratory analysis possibilities and enables the addition of fully textured buildings and terrain to any data visualization. To me, this announcement also signifies more than a technological advancement. It represents a growing trend of synergies across various data providers. For years, the mapping landscape has been characterized by unique solutions from different vendors like Google and Mapbox, each with their own distinct visualizations. Now we're beginning to see how deck.gl's open governance model and Open Geospatial Consortium standards can harmonize these diverse solutions and lead to a more collaborative and integrated mapping ecosystem. I'd like to extend a huge thank you to the Open Visualization members at Carto and our technical steering committee for their contributions and commitment to open source and open governance here. It's a really cool collaboration, and I'm really excited to see what people make with this. This year, we also had a new framework join us: flowmap.gl, a framework for geospatial flow maps. It adds a variety of deck.gl layers for all kinds of flow data. A flow is, say, people moving around a city. If you have a subway system where you know where people are going from A to B around the city, you not only want to know the locations and the geospatial proximity between them, you also have a really big interest in the temporal aspect, the stuff that changes over time. So a flow map is a way to see, in one picture, everything that's changing, not only across a whole area of your city but also over time. I'd also like to announce that we're organizing a second collaborator summit this September. This time it's gonna be in New York City. Stay tuned for more details on this and how to attend.
We're just getting started with all the organization, but I'm really excited for this. It's gonna be another really great get-together with everybody. So, in summary, there's never been a more exciting time to explore geospatial development. OpenViz is a collaborative group of developers building JavaScript data visualization tools in the open. We work on projects that make it easy to add maps and data visualizations to your web applications. If you're interested in learning more, feel free to reach out to me directly, join us on Slack, or come to any of our bi-weekly community meetings. Thanks. Happy to take questions now, and I'll also stick around afterwards in case you'd like to talk more.

First of all, wow, this is extremely exciting. I love data visualization. I've used D3 and other libraries a lot, and three.js and WebGL as well, but for some reason this was not on my radar at all, so I'm obviously not looking at the right Twitter feed. So first of all, thank you for creating this, you and the large team behind it. I'm definitely going to be using it on AR projects and on web apps. I'm wondering, since I'm a sucker for performance and I've even written my own game engine in Vulkan, not that I can use that for web purposes anyway, are you using WebGPU, or thinking about it for the future? And what is the performance level of this? Is there a next level you can reach through WebGPU, or are we already really amazing at the amount of rendering you can do?

Yeah, it's a really great question. WebGPU, for those that may not have heard of it (it's totally fine if you haven't), just officially came out a couple of weeks ago in Google Chrome. It's the next-generation web standard API for accessing GPU resources. It's a pretty big departure from the OpenGL lineage that WebGL was built on, but the same standards group is behind WebGPU.
So yeah, right now deck.gl and luma.gl are all on v8, but v9 is actually scheduled to add WebGPU support to luma.gl, the low-level rendering library, and by extension, over time, deck.gl. Because WebGPU changes the shader language being used, and a lot of different libraries all need to come up together in order to make a big technological change like that, I think it'll take some time. But our approach so far has been that luma.gl will give you a standard interface for accessing either WebGL or WebGPU when you want to do different kinds of rendering, or really any kind of GPU access. By introducing that standard interface, which will be a complete breaking change to luma.gl's API (that's why it's v9, a major change), we'll have deck.gl start to use it. At the beginning, it's gonna just use the WebGL path that it already has, but over time I see that changing, and we really want a smooth transition from WebGL to WebGPU ultimately, because it's a modern API. GPU APIs are never perfect; they can be very painful to use. But what I've heard so far is that WebGPU is about the least painful it could be, and people are pretty excited to use it. So given that, I think we really, really want to see deck.gl become a flagship WebGPU library ultimately. We do some really interesting stuff. I didn't really speak about this because we didn't do much work on it in the last year, but we do a lot of interoperability between different WebGL mapping libraries. Mapbox and MapLibre are WebGL-based, and then you have Google Maps with their own JavaScript library. I mentioned that we can render their Google Earth-style tiles in deck.gl now; that's actually using a deck.gl layer.
But separately from that, a year ago we had already done a different collaboration with them, both with Mapbox and with Google Maps, where you can render deck.gl layers inside those libraries' WebGL contexts. What that lets you do is occlusion. So if you have a building in Mapbox and a bunch of data from deck.gl, and that building is supposed to cover the data because it's physically in front of it, that occlusion will happen across deck.gl and Mapbox or Google Maps. But for that to work, they all have to be using the same low-level layer; they all have to be WebGL-based. So to keep supporting these use cases of what we call interleaved rendering between these libraries, there will probably be a long tail where we all kind of need to come up to WebGPU together in order to keep that compatibility. And of course, WebGPU is forward-facing, and we will get there; we all will.

I'm wondering if you wanna speak to, and I'm sure I could research it later as well, the performance that it already enables right now in WebGL. Do you have any numbers, like what kind of frames per second on what kind of device? Because, especially for big data, which I think is a big part of this project, what kind of performance are we getting right now in the libraries?

Yeah, that's a really, really good question. I'm actually not prepared to answer it, mostly because this is so new that I haven't even really gotten a chance to see it yet, but I'm really interested in learning myself. I think what we've seen is that with libraries like GeoArrow and GeoParquet, we're realizing where the bottleneck is in many real, practical applications out there in the world.
The bottleneck that most web-based data viz applications have is still, in a lot of cases, not actually the WebGL layer; it's what comes before that. Somehow you have to get your data onto the GPU through JavaScript, and it's that "through JavaScript" part where you might still be packing data in a relatively inefficient way compared to the array buffers that actually run on a GPU. You might be using JSON or something very REST-based, or CSVs, all these uncompressed formats that are really easy to use but not super efficient. So what we're really interested in seeing is greater adoption of and support for binary implementations like GeoArrow and GeoParquet. I think that's what can really lift the total amount of data you can show and interact with at 60 frames per second for a while. And then what'll happen, I think, is another boundary will be hit, and people will of course want to do more, and then WebGPU will come in. I've also heard that, in general, WebGPU is gonna be a lot better for general computation. I didn't show anything like this today, but there's a deck.gl-based library out there for a really niche case: if you have a satellite taking imagery of the Earth, it's probably capturing not only pixels you can see, like RGB, but also different bands of light, like infrared or ultraviolet. People use that to do things like detect wildfires or see how much vegetation is around at any given point — like, is there a drought somewhere? You can tell that by collecting different kinds of light. Now, if you're a data analyst, ideally you'd like to explore and tweak the knobs on all these different bands of light and turn them into something you can see.
And that is essentially real-time graphics analytics. Right now it's being done in WebGL in a very hacky way: you have to express all your numbers as RGB values, when what your numbers represent aren't actually colors, they're something numerical. WebGPU changes a lot of that, making it possible to do a lot more memory sharing between different layers and so on. So I think whatever barriers those projects are currently hitting are coming down with the transition to WebGPU. And with all the generative art and AI technologies coming out now, you're gonna be able to do more and more inside a browser, and you're gonna want to visualize more and more. I don't know who's gonna do it, but someone will make an AI-generated real-time base map.

That's one of my use cases, by the way. I work with AI and diffusion models, and I wanna be able to, in the browser, not just render the map but also generate the layers on top of it. And React is what I use as well.

Right — today you kind of have a satellite image or a bunch of disparate sources that can tell you roughly what's in the map you're looking at, but it's really coarse. It might just say this general area has trees, or it's green, or it has some water. But if you wanna see something a lot richer, or you have some information about what that ecosystem really has, I could see that going into a generative art API, and being able to do it all client-side would be really, really cool. So that's probably very far out, but that's the kind of future I'd like to see. Thanks.
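To make the earlier point about data packing concrete: deck.gl layers can accept pre-packed typed arrays instead of arrays of JavaScript objects, which is the same idea GeoArrow exploits. A minimal sketch of that packing step — the `toBinaryAttributes` helper and the `rows` shape are my own illustration, not a library API:

```javascript
// Pack rows of {lng, lat} objects into the flat typed arrays that
// deck.gl's binary attribute interface accepts. The GPU-ready buffer is
// built once up front instead of calling an accessor per object.
function toBinaryAttributes(rows) {
  const positions = new Float32Array(rows.length * 2);
  for (let i = 0; i < rows.length; i++) {
    positions[i * 2] = rows[i].lng;
    positions[i * 2 + 1] = rows[i].lat;
  }
  return {
    length: rows.length,
    attributes: {
      getPosition: {value: positions, size: 2}
    }
  };
}

// Hypothetical usage with a deck.gl layer — the layer reads the buffers
// directly, skipping per-object accessor calls:
//   new ScatterplotLayer({id: 'points', data: toBinaryAttributes(rows)});
```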
Oh yeah, so if you'd like to join our meetings, we meet bi-weekly on Zoom, and it's all on the OpenJS calendar. You can just go to the general OpenJS calendar, and we're on there as the Open Visualization Community Meeting.