Hello, everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives." My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Tom Wall, a member of the Vertica Engineering team. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait; just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand later this week. We'll send you a notification as soon as it's ready. So, let's get started. Tom, over to you.

Hello everyone, and thanks for joining us today for this talk. My name is Tom Wall and I am the leader of Vertica's ecosystem engineering team. We are the team that focuses on building out all the developer tools and third-party integrations that enable the software ecosystem surrounding Vertica to thrive. So today, we'll be talking about some of our new open source initiatives and how they can be really effective for you, making it easier to build and integrate Vertica with the rest of your technology stack. We've got several new libraries, integration projects, and examples to share, all open source, all being built out in the open on our GitHub page.
Whether you use these open source projects or not, there's a very exciting new effort here that will really help to grow the developer community and enable lots of exciting new use cases. So, every developer out there has probably had to deal with a problem like this. You have some business requirements to maybe build some new Vertica-powered application. Maybe you have to build some new system to visualize some data that's managed by Vertica. Through various circumstances, lots of choices might be made for you that constrain your approach to solving a particular problem. These requirements can come from all different places. Maybe your solution has to work with a specific visualization tool or web framework because the business is already invested in the licensing and the tooling to use it. Maybe it has to be implemented in a specific programming language, since that's what all the developers on the team know how to write code with. While Vertica has many different integrations with lots of different programming languages and systems, there are a lot of them out there and we don't have integrations for all of them. So, how do you make ends meet when you don't have all the tools you need? Well, you have to get creative, using tools like pyodbc, for example, to bridge between programming languages and frameworks to solve the problems you need to solve. Most languages do have an ODBC-based database interface. ODBC is a C library, and most programming languages know how to call C code somehow. So that's doable, but it often requires lots of configuration and troubleshooting to make all those moving parts work well together. It's enough to get the job done, but native integrations are usually a lot smoother and easier.
So rather than, for example, in Python, trying to fight with pyodbc to configure things, get Unicode working, and compile all the different pieces the right ways to make it all work smoothly, it would be much better if you could just pip install a library and get to work. And with vertica-python, our new Python client library, you can actually do that. That story probably sounds familiar to a lot of the audience here, because we're all using Vertica, and our challenge as big data practitioners is to make sense of all this stuff despite those kinds of technical and non-technical hurdles. Vertica powers lots of different businesses and use cases across all kinds of different industries and verticals. While there's a lot that's different about us, we're all here together right now for this talk because we do have some things in common. We're all using Vertica, and we're probably also using Vertica with other systems and tools too, because it's important to use the right tool for the right job. That's kind of a founding principle of Vertica, and it's true today too. In this constantly changing technology landscape, we need lots of good tools and well-established patterns, approaches, and advice on how to combine them so that we can be successful doing our jobs. Luckily for us, Vertica has been designed to be easy to build with and extend in this sort of fashion. Databases as a whole have had this goal from the very beginning. They solve the hard problems of managing data so that you don't have to worry about it. Instead of worrying about those hard problems, you can focus on what matters most to you and your domain: implementing that business logic and solving that problem, without having to worry about all of the intense details of what it takes to manage a database at scale. With the declarative syntax of SQL, you tell Vertica what answer you want; you don't tell Vertica how to get it.
Vertica will figure out the right way to do it for you so that you don't have to worry about it. This SQL abstraction is very nice because it's a well-defined boundary: lots of developers know SQL, and it allows you to express what you need without having to worry about those details. So we can be the experts in data management while you worry about your problems. And this goes beyond what's accessible through SQL. With Vertica, we've got well-defined extension and integration points across the product that allow you to customize the experience even further. If you want to do things like write your own SQL functions or extend database operators with UDxs, you can do so. If you have a custom data format, maybe a proprietary format or some source system that Vertica doesn't natively support, we have extension points that allow you to use those. We make it very easy to do parallel, massive data movement: loading into Vertica, but also exporting from Vertica to send data to other systems. And with some new features in Vertica 10, we can also do the same kinds of things with machine learning models, importing and exporting to tools like TensorFlow. It's these integration points that have enabled Vertica to build out this open architecture and a rich ecosystem of tools, both open source and closed source, of all sorts of different varieties, that solve all kinds of problems that are common in this big data processing world. Whether it's open source streaming systems like Kafka or Spark, or more traditional ETL tools on the loading side, but also BI tools and visualizers and things like that to view and use the data that you keep in your database. And then, of course, Vertica needs to be flexible enough to be able to run anywhere, so you can really take Vertica and use it the way you want to solve the problems that you need to solve.
So Vertica has always employed open standards and integrated with all kinds of different open source systems. What we're really excited to talk about now is that we are taking our new integration projects and making those open source, too. In particular, we've got two new open source client libraries that allow you to build Vertica applications for Python and Go. These libraries act as a sort of foundation for all kinds of interesting applications and tools. Upon those libraries, we've also built some integrations ourselves, and we're using these new libraries to power some new integrations with some third-party products. Finally, we've got lots of new examples and reference implementations out on our GitHub page that can show you how to combine all these moving parts in exciting ways to solve new problems. And the code for all these things is available now on our GitHub page, so you can use it however you like and even help us make it better, too. The first such project that we have is called Vertica Python. Vertica Python began at our customer Uber, and then in late 2018, we collaborated with them, took it over, and made Vertica Python the first official open source client for Vertica. You can use this to build your own Python applications, or you can use it via tools that were written in Python. Python has grown a lot in recent years, and it's a very common language for solving lots of different problems and use cases in the big data space, from things like DevOps and data science or machine learning to just homegrown applications.
We use Python a lot internally for our own QA testing and automation needs, and with the Python 2 end of life that happened at the end of 2019, it was important that we had a robust Python solution to help migrate our internal stuff off of Python 2 and also to provide a nice migration path for all of you, our users, who might be worrying about the same kinds of problems with your own Python code. So Vertica Python is used already for lots of different tools, including Vertica's admin tools, starting with 9.3.1. It was also used by Datadog to build a Vertica Datadog integration to monitor your Vertica infrastructure within Datadog. Here's a little example of how you might use the Python client to do some work. We open a connection, we run a query to find out what node we're connected to, and then we do a little data load by running a COPY statement. And this is designed to have a familiar look and feel if you've ever used a Python database client before: we implement the DB-API 2.0 standard, and it feels like a Python package. That includes things like being published to the centralized PyPI package index, so you can just pip install this right now and start using it. We also have our client for Go. This is called vertica-sql-go, and it's a very similar story, just in a different context and a different programming language. vertica-sql-go began as a collaboration with the Micro Focus SecOps group, who build Micro Focus's security products, some of which use Vertica internally to provide some of those analytics. So you can use it to build your own apps in the Go programming language, but you can also use it via tools that are written in Go. Most notably, we have our Grafana integration, which we'll talk a little bit more about later, that leverages this new client to provide Grafana visualizations for Vertica data.
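The connect-query-COPY pattern described above can be sketched like this with vertica-python. This is a sketch, not the slide's literal code; the host, credentials, table, and file names here are placeholders you would replace with your own.

```python
def copy_example():
    # Imported inside the function so the sketch can be read without the
    # package installed; install it with: pip install vertica-python
    import vertica_python

    conn_info = {
        'host': '127.0.0.1',   # placeholder cluster address
        'port': 5433,
        'user': 'dbadmin',     # placeholder credentials
        'password': '',
        'database': 'VMart',   # placeholder database name
    }
    # Open a connection; the context manager closes it for us.
    with vertica_python.connect(**conn_info) as conn:
        cur = conn.cursor()
        # Find out which node we're connected to.
        cur.execute("SELECT node_name FROM v_monitor.current_session")
        print(cur.fetchone())
        # Load some data by streaming a local file into a COPY statement.
        with open('/tmp/data.csv', 'rb') as f:
            cur.copy("COPY my_table FROM STDIN DELIMITER ','", f)
        conn.commit()
```

Because the client implements DB-API 2.0, the `connect` / `cursor` / `execute` shape is the same one you would use with any other Python database package.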
And Go is another programming language rising in popularity, because it offers an interesting balance of different programming design trade-offs: it's got good performance, good concurrency, and memory safety. We liked all those things, and we are using it to power some internal monitoring stuff of our own. Here's an example of the code that you can write with this client. This is Go code that does a similar thing: it opens a connection, it runs a little test query, and then it iterates over those rows, processing them using Go data types. You get that native look and feel just like you do in Python, except this time in the Go language. And you can go get it the way you usually acquire packages with Go, by running that command there. It's important to note here that for these two projects we're really doing open source development; we're not just putting code on our GitHub page. If you go out there and look, you can see that you can ask questions, you can report bugs, you can submit pull requests yourselves, and you can collaborate directly with our engineering team and the other Vertica users out on our GitHub page. Because it's out on our GitHub page, it allows us to be a little bit faster with the way we ship and deliver functionality compared to the core Vertica release cycle. In 2019, for example, as we were building features to prepare for the Python 3 migration, we shipped 11 different releases, with 40 customer-reported issues filed on GitHub, over 78 different pull requests, and lots of community engagement as we did so. So lots of people are using this already; as our GitHub stats last showed, we see about 5,000 downloads a day from people using it in their software. Again, we want to make this easy not just to use but also to contribute to, understand, and collaborate with us on. So all these projects are built using the Apache 2.0 license.
The master branch is always available and stable with the latest, greatest functionality, and you can always build it and test it the way we do, so that it's easy for you to understand how it works and to submit contributions or bug fixes or even features. It uses automated testing, both locally and for pull requests, and for Vertica Python it's fully automated with Travis CI. So we're really excited about doing this, and we're really excited about where it can go in the future, because this offers some exciting opportunities for us to collaborate with you more directly than we ever have before. You can contribute improvements and help us guide the direction of these projects, but you can also work with each other to share knowledge, implementation details, and various best practices. Maybe you think, well, I don't use Python, I don't use Go, so maybe it doesn't matter to me. But I would argue it really does matter, because even if you don't use these tools and languages, there are lots of amazing Vertica developers out there who do, and these clients act as low-level building blocks for all kinds of different interesting tools, both in the Python and Go worlds but also well beyond that, because these implementations and examples really generalize to lots of different use cases. We're going to do a deeper dive now into some of these to understand exactly how that's the case and what you can do with these things. So let's take a deeper look at some of the details of what it takes to build one of these open source client libraries. These database client interfaces, what are they exactly? Well, we all know SQL, but if you look at what SQL specifies, it really only talks about how to manipulate the data within the database. Once you're connected, you can run commands with SQL, but these database client interfaces address the rest of those needs: what does the programmer need to do to actually process those SQL queries?
And so these interfaces are specific to a particular language or technology stack, but the use cases and the architectures and design patterns are largely the same between different languages. They all need to do some networking to connect, authenticate, and create a session. They all need to be able to run queries, load some data, and deal with problems and errors. They also have a lot of metadata and type mapping, because you want to use these clients the way you use those programming languages, which might be different from the way that Vertica's data types and semantics work. Some of these client interfaces are truly standards, and they are robust enough in what they call for to support a truly pluggable driver model, where you might write an application that codes directly against the standard interface and then plug in a different database driver, like a JDBC driver, to have that application work with any database that has a JDBC driver. Most of these interfaces aren't as robust as JDBC or ODBC, but that's okay, because as good as a standard is, every database is unique for a reason, and so you can't really expose all of those unique properties of a database through these standard interfaces. Vertica is unique in that it can scale to the petabytes and beyond, and you can run it anywhere, in any environment, whether it's on-prem or in the cloud. So surely there's something about Vertica that's unique, and we want to be able to take advantage of that fact in our solutions. Even though these standards might not cover everything, there are often common patterns that arise to solve these problems in similar ways. When there isn't enough of a standard to find those common semantics that different databases might have in common, what you often see is that tools will invent plug-in layers or glue code to compensate, by defining a sort of application-wide standard to cover some of those same semantics.
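The metadata and type-mapping work mentioned above can be made concrete with a small sketch. The SQL type names are real Vertica types, but this mapping table and the convert() helper are simplified assumptions for illustration, not vertica-python's actual implementation.

```python
# Map a few Vertica SQL type names to converters that turn text-format
# wire values into native Python objects. Simplified illustration only.
from datetime import date
from decimal import Decimal

VERTICA_TO_PYTHON = {
    'INTEGER': int,                  # Vertica INTEGER -> Python int
    'FLOAT':   float,                # Vertica FLOAT   -> Python float
    'NUMERIC': Decimal,              # Vertica NUMERIC -> Decimal (keeps exactness)
    'VARCHAR': str,                  # Vertica VARCHAR -> Python str
    'BOOLEAN': lambda v: v in ('t', 'true', '1'),
    'DATE':    date.fromisoformat,   # 'YYYY-MM-DD' text -> datetime.date
}

def convert(type_name, wire_text):
    """Convert one text-format wire value into a native Python object."""
    return VERTICA_TO_PYTHON[type_name](wire_text)
```

The point is the shape of the problem: the client has to know, per result column, which database type it is looking at and which native type the programmer expects back.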
Later on we'll get into some of the details and show off what exactly that means. So if you connect to a Vertica database, what's actually happening under the covers? You have an application, you have a need to run some queries, so what does that actually look like? Well, probably as you would imagine, your application is going to invoke some API calls in some client library or tool. This library takes those API calls and implements them, usually by issuing some networking protocol operations, communicating over a network to ask Vertica to do the heavy lifting required for that particular API call. These APIs usually do the same kinds of things, although some of the details might differ between these different interfaces, but you do things like establish a connection, run a query, iterate over your rows, and manage your transactions, that sort of thing. Here's an example from Vertica Python which goes into some of the details of what actually happens during the Connect API call, and you can see all these details in our GitHub implementation of this. There are actually a lot of moving parts in what happens during a connection, so let's walk through some of that and see what actually goes on. I might have my API call where I say Connect, and I give it a DNS name, which might refer to an entire cluster, and I give it my connection details, my username and password, and I tell the Python client to get me a session, give me a connection so I can start doing some work. Well, in order to implement this, what needs to happen? First, we need to do some TCP networking to establish our connection. We need to understand what the request is and where you're going to connect to by parsing the connection string. And Vertica is a distributed system that provides high availability.
So we might need to do some DNS lookups to resolve that DNS name, which might refer to an entire cluster and not just a single machine, so that you don't have to change your connection string every time you add or remove nodes from the database. So we do some high-availability, DNS-lookup stuff, and then once we connect, we might do load balancing too, to balance the connections across the different initiator nodes in the cluster, or in a subcluster, as needed. Once we land on the node we want to be at, we might do some TLS to secure our connections. Vertica supports the standard TLS protocols, so this looks pretty familiar if you've ever used TLS anywhere before: you're going to do a certificate exchange, the client might send its own certificate to the server too, and then you're going to verify that the server is who it says it is, so that you can know that you trust it. Once you've established that connection and secured it, then you can actually begin to request a session within Vertica. You're going to send over your user information, like here's my username and here's the database I want to connect to. You might send some information about your application, like a session label, so that you can differentiate on the database, with monitoring queries, what the different connections are and what their purpose is. And then you might also send over some session settings, to do things like auto commit, to change the state of your session for the duration of this connection, so that you don't have to remember to do that with every query that you run. Once you ask Vertica for a session, before Vertica will give you one, it has to authenticate you, and Vertica has lots of different authentication mechanisms, so there's a negotiation that happens there to decide how to authenticate. Vertica decides based on who you are and where you're coming from on the network, and then you'll do an auth-specific kind of exchange, depending on what the auth mechanism calls for, until you are authenticated.
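The name-resolution and node-selection steps described above can be sketched in a few lines. Real clients layer TLS and Vertica's own load-balancing messages on top of this; here we only show DNS fanning one cluster name out to several addresses and a random pick among them, which is one simple balancing policy.

```python
# Sketch: resolve a cluster DNS name to candidate hosts, then pick one
# initiator node so connections spread across the cluster.
import random
import socket

def resolve_candidates(host, port=5433):
    """Resolve a (possibly cluster-wide) DNS name to every address it maps to."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr starts with the IP address; dedupe and sort.
    return sorted({info[4][0] for info in infos})

def pick_initiator(host, port=5433):
    """Pick one node to connect to from the resolved candidates."""
    return random.choice(resolve_candidates(host, port))
```

Because the DNS name, not a single machine, is what the application configures, adding or removing nodes never forces a connection-string change.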
Finally, Vertica trusts you and lets you in. So you're going to establish a session in Vertica, and you might do some note keeping on the client side just to know what happened. You might log some information, you might record the version of the database, you might do some protocol feature negotiation: if you connect to a version of the database that doesn't support all these protocols, you might decide to turn some functionality off, that sort of thing. But finally, after all that, you can return from this API call, and then your connection is good to go. So that connection is just one example of many different APIs, and we're excited here because with Vertica Python we're really opening up the Vertica client wire protocol for the first time. If you're a low-level kind of Vertica developer and you've used Postgres before, you might know that some of Vertica's client protocol is derived from Postgres, but they do differ in many significant ways, and this is the first time we've ever revealed those details about how it works and why. Not all Postgres protocol features work with Vertica, because Vertica doesn't support all the features that Postgres does. Postgres, for example, has a large object interface that allows you to stream very wide data values over, whereas Vertica doesn't really have very wide data values; you have varchars and long varchars, but that's about as wide as you can get. Similarly, the Vertica protocol supports lots of features not present in Postgres. Load balancing, for example, which we just went through an example of: Postgres is a single-node system, so it doesn't really make sense for Postgres to have load balancing, but load balancing is really important for Vertica because it is distributed. Vertica Python serves as an open reference implementation of this protocol, with all kinds of new details and extension points that we haven't revealed before.
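To make the wire-protocol discussion above a bit more concrete: Postgres-style protocols frame each message as a one-byte type code, a four-byte big-endian length (which counts itself), and a payload. This encode/decode pair is a simplified sketch of that framing, the kind of building block you would want for the mocking, testing, and proxying ideas that come up later; it is not the full Vertica protocol.

```python
# Sketch of Postgres-style message framing: type byte + 4-byte length + payload.
import struct

def encode_message(type_code: bytes, payload: bytes) -> bytes:
    """Frame a payload; the length field includes its own four bytes."""
    return type_code + struct.pack('!I', 4 + len(payload)) + payload

def decode_message(buf: bytes):
    """Split one framed message back into (type_code, payload)."""
    type_code = buf[:1]
    (length,) = struct.unpack('!I', buf[1:5])
    # The payload is everything after the length field, up to the declared end.
    return type_code, buf[5:1 + length]
```

A proxy that intercepts query messages, or a test harness that simulates a server, starts from exactly this kind of framing logic.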
So if you look at these boxes below, all these different things are new protocol features that we've implemented since August 2019, out in the open on our GitHub page for Python. Now, the Vertica SQL Go implementation of these things is still in progress, but the core protocols are there for basic query operations. There's more to do there, but we'll get there soon. So this is really cool, because not only do you now have a Python client implementation and a Go client implementation of this, but you can use this protocol reference to do lots of other things too. The obvious thing you could do is build more clients for other languages. So if you have a need for a client in some other language that Vertica doesn't support yet, now you have everything available to solve that problem and to go about doing so if you need to. But beyond clients, it's also useful for other things. You might use it for mocking and testing: rather than connecting to a real Vertica database, you can simulate some of that. You can also use it to do things like query routing in proxies. So Uber, for example: the blog linked here tells a great story of how they route different queries to different Vertica clusters by intercepting these protocol messages, parsing the queries in them, and deciding which clusters to send them to. A lot of these things are just ideas today, but now that you have the source code, there's no limit in sight to what you can do with this thing. And so we're very interested in hearing your ideas and requests, and we're happy to offer advice and collaborate on building some of these things together. So let's take a look now at some of the things we've already built that do these sorts of things. Here's a picture of Vertica's Grafana connector, with some data powered from an example that we have in this blog link here.
So this has sort of an Internet of Things kind of use case to it, where we have lots of different sensors recording flight data, feeding into Kafka, which then gets loaded into Vertica, and then finally it gets visualized nicely here with Grafana. And Grafana's visualizations make it really easy to analyze the data with your eyes and see when something happens. In these highlighted sections here, you notice a drop in some of the activity; that's probably a problem worth looking into. It might be a lot harder to see that just by staring at a large table yourself. So how does a picture like that get generated with a tool like Grafana? Grafana specializes in visualizing time series data, and time can be really tricky for computers to handle correctly. You have time zones, daylight savings, leap seconds, negative-infinity timestamps (please don't ever use those). And as if it wasn't hard enough with just those problems, what makes it harder is that every system does it slightly differently. So if you're querying some time data, how do we deal with these semantic differences as we cross these domain boundaries, from Vertica to Grafana's backend architecture, which is implemented in Go, and out to Grafana's frontend, which is implemented with JavaScript? If you read this from the bottom up in terms of the processing: first you select the timestamp, and Vertica's timestamp has to be converted to a Go time object, and we have to reconcile the differences that there might be as we translate it. Go time has a different time zone specifier format, and it also supports nanosecond precision, while Vertica only supports microsecond precision. That's not too big of a deal when you're querying data, because you just see some extra zeros in those fractional seconds. But on the way in, if you're loading data, we have to find a way to resolve those things. Once it's in the Go process, it has to be converted further to render in the JavaScript UI.
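The nanosecond-to-microsecond reconciliation just described can be sketched like this; Python is used here for consistency with the other examples (the real Grafana backend does this with Go's time package), and truncation is shown as one possible policy, rounding being another.

```python
# Sketch: a nanosecond epoch timestamp headed into a microsecond-precision
# database must drop its extra digits. Here we truncate.
from datetime import datetime, timedelta, timezone

def truncate_to_micros(epoch_nanos: int) -> datetime:
    """Convert a nanosecond epoch timestamp to a microsecond-precision UTC datetime."""
    micros = epoch_nanos // 1000  # integer division drops the nanosecond digits
    # Build the datetime with integer arithmetic to avoid float rounding.
    return datetime.fromtimestamp(0, tz=timezone.utc) + timedelta(microseconds=micros)
```

Going the other direction, from microseconds out to a nanosecond-capable system, is the easy case: you just append zeros, which is why querying looks fine and loading is where the policy decision lives.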
There the Go time object has to be converted to a JavaScript AngularJS date object, and there too we have to reconcile those differences. A lot of these differences might just be presentation, and not so much the actual data changing, but you might want to choose to render the date in a more human-readable format, like we've done in this example here. Here's another picture. This is another picture of some time series data, and this one shows that you can actually write your own queries to provide answers. If you look closely right here, you can see there are actually some functions that might not look too familiar to you if you know Vertica's functions. Vertica doesn't have a $__time function or a $__timeFilter function. So what's actually happening there? How does this actually provide an answer if it's not really real Vertica syntax? Well, it's not sufficient to just know how to manipulate data; it's also really important that you know how to operate with metadata, information about how the data works in the data source, Vertica in this case. Grafana needs to know how time works in detail for each data source, beyond doing the basic I/O that we just saw in the previous example. It needs to know: how do you connect to the data source to get some time data? How do you know what time data types and functions there are and how they behave? How do you generate a query that references the time literal? And finally, once you've figured out how to do all that, how do you find the time data in the database? How do you know which tables have time columns that might be worth rendering in this kind of UI? Go's database standard doesn't actually offer many metadata interfaces. Nevertheless, Grafana needs to know those answers, and so it has its own plugin layer that provides a standardizing layer whereby every data source can implement the hints and metadata customization needed to have an extensible data source backend.
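A data source plugin's macro layer, as described above, boils down to rewriting those placeholder functions into native SQL before the query is sent. The sketch below mirrors Grafana's $__time and $__timeFilter names, but the exact SQL each one expands to here is a simplified assumption, not the Vertica plugin's literal output.

```python
# Sketch: expand Grafana-style time macros into plain SQL.
import re

def expand_macros(sql: str, time_from: str, time_to: str) -> str:
    """Rewrite $__time(col) and $__timeFilter(col) into native SQL."""
    # $__time(col) -> cast the column and alias it as "time"
    sql = re.sub(r'\$__time\((\w+)\)',
                 r'CAST(\1 AS TIMESTAMP) AS "time"', sql)
    # $__timeFilter(col) -> a BETWEEN predicate over the dashboard's range
    sql = re.sub(r'\$__timeFilter\((\w+)\)',
                 rf"\1 BETWEEN '{time_from}' AND '{time_to}'", sql)
    return sql
```

So a panel query like SELECT $__time(ts), value FROM metrics WHERE $__timeFilter(ts) becomes ordinary SQL that any Vertica node can execute, with the dashboard's selected time range substituted in.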
So we have another open source project, the Vertica Grafana data source, which is a plugin that uses Grafana's extension points, with JavaScript in the frontend plugins and with Go in the backend plugins, to provide Vertica connectivity inside Grafana. The way this works is that the plugin framework defines those standardizing functions like $__time and $__timeFilter, and it's our plugin that rewrites them in terms of Vertica syntax. So in this example, $__time gets rewritten to a Vertica cast, and $__timeFilter becomes a BETWEEN predicate. That's one example of how you can use Grafana, but also how you might build any arbitrary visualization tool that works with data in Vertica. So let's now look at some other examples and reference architectures that we have out on our GitHub page. For some advanced integrations, there's clearly a need to go beyond these standards. SQL and the surrounding standards like JDBC and ODBC were really critical in the early days of Vertica, because they enabled a lot of generic database tools, and those will always continue to play a really important role. But the big data technology space moves a lot faster than these old database standards can keep up with. There's all kinds of advanced analytics and query pushdown logic that wasn't ever possible 10 or 20 years ago that Vertica can do natively. There are also all kinds of data-oriented application workflows, doing things like streaming data or parallel loading or machine learning. All of these things we need to build software with, but we don't really have standards to go by. So what do we do there? Well, open source implementations make for easier integrations and applications all over the place. Even if you're not using Grafana, for example, other tools have similar challenges that you need to overcome, and it helps to have an example there to show you how to do it. Take machine learning, for example.
There have been many excellent machine learning tools that have arisen over the years to make data science and the task of machine learning a lot easier. A lot of those have basic database connectivity, but they generally only treat the database as a source of data, so they do lots of data I/O to extract data from a database like Vertica for processing in some other engine. And we all know that's not the most efficient way to do it; it's much better if you can leverage Vertica's scale and bring the processing to the data. A lot of these tools don't take full advantage of Vertica because there's not really a uniform way to do so with these standards. So instead we have a project called Vertica ML Python, and this serves as a reference architecture for how you can do scalable machine learning with Vertica. This project establishes a sort of familiar machine learning workflow that scales with Vertica. It feels similar to, say, a scikit-learn kind of project, except all of the processing and aggregation and heavy lifting happens in Vertica. This makes for a much more lightweight, scalable approach than you might otherwise be used to. So with Vertica ML Python, you can probably use this yourself, but you can also see how it works. If it doesn't meet all your needs, you can still see the code and customize it to build your own approach. We've also got lots of examples of our UDX framework. This is an older GitHub project, we've actually had it for a couple of years, but it is really useful and important, so I wanted to plug it here. Our user-defined extension framework, or UDXs, allows you to extend the operators that Vertica executes when it does a database load or a database query. So with UDXs, you can write your own domain logic in C++, Java, Python, or R, and you can call it within the context of a SQL query.
Vertica brings your logic to the data and makes it fast, scalable, fault tolerant, and correct for you, so you don't have to worry about all those hard problems. Our UDX examples demonstrate how you can use our SDK to solve interesting problems, and some of these examples are complete, usable packages or libraries. For example, we have a curl source that allows you to extract data from any endpoint reachable via curl and load it into Vertica. We've got things like an ODBC connector that allows you to access data in an external database via an ODBC driver within the context of a Vertica query, and all kinds of parsers and string processors and things like that. We also have more exciting and surprising things that you might not think of Vertica being able to do, like a heat map generator, which takes XY coordinates and renders them on top of an image to show you the hotspots in it. The image on the right was actually generated from one of our intern gaming sessions a few years back. All these things are great examples that show you not just how you can solve problems, but also how you can use this SDK to solve things that maybe no one else has to solve, unique to your business and your needs. Another exciting benefit is with testing. The test automation strategy that we have in Vertica Python and our other clients really generalizes well beyond the needs of a database client. Anyone who has ever built a Vertica integration or application probably needs to write some integration tests, and that can be hard to do with all the moving parts in a big data solution. But with our code being open source, you can see in Vertica Python in particular how we structured our tests to facilitate smooth testing that's fast, deterministic, and easy to use.
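The binning step at the heart of a heat map like that one can be sketched in a few lines of plain Python. The real UDX would run equivalent logic inside the Vertica engine over arbitrarily large data, but the core idea is just counting points into grid cells; the function and parameter names here are illustrative.

```python
def heatmap_bins(points, width, height, nx, ny):
    """Count (x, y) points into an nx-by-ny grid of cells.

    Cells with high counts are the hotspots that a renderer would then
    paint as a translucent overlay on top of the background image.
    """
    grid = [[0] * nx for _ in range(ny)]
    for x, y in points:
        # Map each coordinate to a cell index, clamped to the grid bounds.
        cx = min(max(int(x / width * nx), 0), nx - 1)
        cy = min(max(int(y / height * ny), 0), ny - 1)
        grid[cy][cx] += 1
    return grid

# Three clicks near the top-left corner and one far away:
grid = heatmap_bins([(10, 10), (12, 11), (11, 9), (80, 75)], 100, 100, 4, 4)
```

In the UDX version, Vertica parallelizes this counting across nodes and the SQL query simply selects the coordinates to feed in.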
We've automated the download, installation, and deployment of a Vertica Community Edition, and with a single click you can run through the tests locally and as part of the PR workflow via Travis CI. We also do this from multiple different Python environments: for all supported Python versions up to 3.8, for different Python interpreters, and for different Linux distros, we run through all of them very quickly with ease, thanks to all this automation. Today you can see how we do it in Vertica Python. In the future we might want to spin that out into its own standalone test-bed starter project, so that if you're starting any kind of new Vertica integration, it could be a good starting point for you to get going quickly. So that brings us to some of the future work we want to do here in the open source space. Well, there's a lot of it. In terms of the clients, for Python we are marching towards our 1.0 release, which is when we aim to be protocol complete, supporting all of Vertica's unique protocols, including COPY LOCAL and some new protocols invented to support complex types, which is a new feature in Vertica 10. We have some cursor enhancements to do things like better streaming and improved performance. Beyond that, we want to take it where you want to bring it, so send us your requests. On the Go client front, we're about a year behind Python in terms of the protocol implementation; the basic operations are there, but we still have more work to do to implement things like load balancing and some of the more advanced options. But there too we want to work with you, and we want to focus on what's important to you, so that we can continue to grow and be more useful and more powerful over time. Finally, there's the question of what else, beyond database clients, we want to do to support open source.
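As a rough sketch of what that kind of test automation looks like, here is an illustrative Travis CI configuration in the spirit of what was just described: a matrix of Python versions, each run against a throwaway Vertica Community Edition started in a container. The script names and image tag are placeholders, not the project's real configuration.

```yaml
# Illustrative CI matrix; versions, image, and paths are placeholders.
language: python
python:
  - "2.7"
  - "3.6"
  - "3.7"
  - "3.8"
services:
  - docker
before_install:
  # Pull and start a single-node Vertica CE for the integration tests.
  - docker run -d -p 5433:5433 --name vertica <vertica-ce-image>
  - ./ci/wait_for_vertica.sh
script:
  - pytest vertica_python/tests
```

The point of the pattern is that every pull request gets the same one-click, deterministic environment a developer would use locally.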
If you're building a very deep or robust Vertica integration, you probably need to do a lot more exciting things than just run SQL queries and process the answers. Especially if you're an OEM or a vendor that resells Vertica packaged as a black-box piece of a larger solution, you might have to manage the whole operational life cycle of Vertica. There are even fewer standards for doing all these different things compared to SQL, which is a well-established pattern with lots of downstream work that it can enable. So there's clearly a need for lots of other open source protocols, architectures, and examples to show you how to do these things in lieu of real standards. We talked a little bit about how you could do UDXs or testing or machine learning, but there are all sorts of other use cases too. That's why we're excited to announce here Awesome Vertica, which is a new collection of open source resources available on our GitHub page. If you haven't heard of the awesome manifesto before, I highly recommend you check out the GitHub page on the right. We're not unique here; there are lots of awesome lists for all kinds of different tools and systems out there. It's a great way to establish a community and share different resources, whether they're open source projects, blogs, examples, references, community resources, and all that. It is an open source project itself, sort of an open source wiki, and you can contribute to it by submitting a PR yourself. We've seeded it with some of our favorite tools and projects out there, but there's plenty more, and we hope to see it grow over time. Definitely check this out and help us make it better. With that, I'm going to wrap up. I wanted to thank you all. Special thanks to Sitingren and Roger Huber, who are the project leads, but lots of us have already been contributing. This has already been going on for a long time, and we hope to keep it going and keep it growing with your help.
If you want to talk to us, you can find us at this email address here, but of course you can also find us on the Vertica forums, or you can talk to us on GitHub, where you can find links to all the different projects I talked about today. With that, I think we're going to wrap up, and now we'll hand it off for some Q&A.