Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DataVersity. I would like to thank you for joining the DataVersity webinar, "Focus on Your Analysis, Not Your SQL Code," sponsored today by Alteryx. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A panel in the bottom right-hand corner of your screen. And if you'd like to tweet, we encourage you to share highlights and questions via Twitter using hashtag DataVersity. If you'd like to chat with us or with each other, we certainly encourage you to do so; just click the chat icon in the upper right-hand corner for that feature. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of this session, and additional information that we've posted throughout the webinar.

Now let me introduce our speakers for today, Beth Narish and Dan Hilton. Beth is a Senior Product Marketing Manager for Alteryx. She has over 10 years' experience in finance, market research, and analytics software, with roles in product marketing, channel marketing, and solutions marketing. A strong communicator with excellent interpersonal skills, Beth exhibits a high-energy, can-do attitude with a unique ability to manage cross-functional teams, drive collaboration, and deliver on a unified vision. Dan provides solutions and services focused on analytics, big data processing, predictive modeling, and server integrations. He has delivered analytics projects across many industries including real estate, healthcare, retail, marketing, and financial services. His continued role as a Solutions Architect focuses on server implementations, application architecture, and data governance. And with that, let me turn it over to Dan and Beth to get us started.

Hello and welcome. Thanks so much, everybody, for joining us today. I'm Beth, and I'm going to go ahead and kick us off, starting with the agenda. Today we're going to talk through the historical approach to analytics that exists within organizations and how SQL fits in with that, as well as some challenges we may face. We'll talk about who Alteryx is and what our platform delivers, and then walk through some benefits of a workflow. And then I'll pass it over to Dan, who will show you the product in action and do a demo for you so you can see Alteryx in action. Next slide, please.

So the traditional approach to analytics that exists within organizations is often frustrating for many parties involved at all touch points of the process that you can see on the screen: for DBAs and IT, for statisticians and data scientists, and also for analysts who sit in various lines of business. This process often starts when an analyst is looking to create a report to help solve a business problem. They need a specific data set, so they'll submit a request or a ticket for these data pulls or these data sets. And these data pulls often are turned around by a very busy IT team and, oftentimes, an overworked database analyst. But what often happens, right when that data set is about to be delivered to the analyst, is that something needs to change. That analyst wants to look at something different, or the client requests that a report look a little bit different than it did last month. And this kicks off the whole process to start all over again at the very beginning.
So it's frustrating for everyone involved. There are too many people that need to touch the data, using too many different tools, with too many steps in the process. And from the analyst perspective, it's not only affecting their performance, but also their job satisfaction. Next slide, please.

So what tools are being used for analysis within organizations? Well, as you may imagine, spreadsheets like Excel are still by far the preferred tool for this analysis. But in a survey that we're quoting here, published by the Harvard Business Review, 37% of respondents actually indicated that they're using low-level SQL queries for their data analysis. So what exactly does this mean? It means that if you're not a SQL coder by trade, but you happen to be sitting in an analyst position in an organization, you might be teaching yourself how to write SQL code. And if that's the case, this might not be something that you want to do, it might not be something that you're good at, and writing code may not even be what you're interested in doing in your job. But most of us know analysts, or we are analysts, and we know that they'll do whatever it takes to get the job done. So they're finding ways to make this work for them. By writing the SQL code, they're able to be more efficient, get access to the data that they need faster, and eventually create the insights they need to help them solve a business problem. So why SQL? Next slide, please. Thank you.

SQL is the standard language for relational database management systems. It's been the primary language for over 40 years, actually. It's the primary language for Oracle, for SQL Server, for MySQL, for DB2, and honestly probably a hundred different databases. So these analysts, who are resilient, are doing what they do best: they're solving problems. They're going to their IT counterparts, or the database analysts who historically have given them these data files or these data sets, and they're asking, if you will, for a lesson in Coding 101. They're requesting the SQL scripts that the DBA has created, and the analyst is taking that existing SQL script, updating the code, and making it work for the new data pulls they need for additional analyses. So many of these analysts are using SQL to query these databases and to export that data to Excel for analysis purposes. Now, you may be wondering about NoSQL. It has gained popularity with Hadoop, with Mongo, and with others, but by far, at the end of the day, SQL is still king.

So that said, there are challenges that this SQL coding can bring to the table for analysts who aren't necessarily savvy in writing this code. Can you go to the next slide, please, Dan? As I mentioned earlier, analysts who are teaching themselves SQL or learning it from their IT colleagues aren't necessarily trained in writing this coding language. They often don't have formal education in this space, and they might not even necessarily be interested in writing code. But even if they've been able to do one data pull themselves, SQL is like so many other things: it takes practice to get good at it. But even more so, it takes practice to keep your skills up.
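For illustration, a handed-down script of the kind Beth describes might look something like this. The table and column names here are hypothetical, but the pattern of hard-coded literals that must be edited by hand for every new pull is typical:

```sql
-- A hypothetical script inherited from a DBA. The date literals
-- are hard-coded, so each new quarter's pull means editing the
-- code by hand and hoping nothing else breaks.
SELECT c.CustomerID,
       c.Country,
       o.OrderDate,
       o.TotalAmount
FROM Customers AS c
INNER JOIN Orders AS o
        ON o.CustomerID = c.CustomerID
WHERE o.OrderDate BETWEEN '2016-10-01' AND '2016-12-31';  -- Q4 only
```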
So for example, say you did a quarterly report in Q4: you used some SQL code from a DBA and were able to do that export of data yourself. If you have to run the same query again, but with Q1 numbers this time, and you haven't touched that code, or any SQL code, for three months, it's really hard as an analyst to keep your skills sharp over that period of time. And stepping a little bit broader, outside of IT, the DBAs, and these analysts, think about the other colleagues within your organization who are interested in the data outcomes. Oftentimes, these colleagues can't pick up SQL code and read it or understand what the code is meant to do. They don't necessarily understand the transformations that were made to the data, and when data changes, which we all know is frequent, managers or business leaders who don't know how to read SQL may not be able to repeat the steps in your process. So it's only by chance that they're able to understand what you did with your analysis if they're trying to replicate that process, and if they have to do any sort of troubleshooting at each step of the query, it can be very tedious and often takes a lot of time.

So how can Alteryx help? Some of you may be familiar with us, some of you may not, but we are a leading platform for self-service data analytics. We're giving analysts the ability to prep, blend, and analyze all of their data in a workflow that is repeatable, to go out to the broader organization with the ability to deploy and share analytics at scale, and finally to circumvent, if you will, that historical approach that exists within organizations and takes a lot of time, giving the analyst the ability to deliver those deeper insights in hours, not weeks.

So in terms of our overall platform, how are we delivering these capabilities? I mentioned that we provide a repeatable workflow to prep, blend, and analyze data. The very first thing that we want you to be able to do is to input all of the relevant data that you need to solve your business problem, regardless of where that data sits or the format it's in. So whether it's structured, unstructured, or semi-structured, whether it's sitting in SQL Server, in Hadoop, in a data warehouse, anywhere in the cloud, or even sitting on your desktop in Excel, we give you the ability to access all of this data, pull it together, and create your analytic data set by joining all of those different sources together based on common fields. Now, if all of those sources aren't enough and you're looking for some third-party data to supplement the information you do have access to, we give you the ability to enrich your data with demographics from either the Census or Experian, with business data from Dun & Bradstreet, and also with spatial data provided by TomTom.

Now, on the screen you can see what the actual workflow looks like, and you'll be able to see this more specifically once Dan goes through the demo. But if you're looking to take the next step in your analytics process, which is actually doing the analysis, it's all done in the same drag-and-drop workflow with no coding required. So if you're looking to do statistical analysis, predictive analysis, or spatial analysis, it's all within the same visual workflow. And the last step of the process is that the insights you're creating as an analyst really need to be shared with your broader organization.
With the legacy approach to analytics, reports often have been created in a totally separate environment. But with Alteryx, this is really just one more step at the end of the workflow, giving you the ability to create static reports in Alteryx, to export data for visualization to Qlik or Tableau, and also to create analytic applications that will allow other people within your organization to actually run the analysis themselves. And as I mentioned, Dan will show you this in detail later in the webinar.

So now that you understand who we are as an organization and what the Alteryx platform can help deliver, how specifically can Alteryx help people who are writing code in SQL? Well, the first way that we can help is really with transparency. SQL code is not easy for everyone to understand, especially when you step outside of our line-of-business analysts who are learning it and the IT folks and DBAs who are really experts in this area. When we go to other parts of the organization, managers, decision makers, business leaders, it's not easy for them to pick up and understand SQL code. And oftentimes they do have questions about what your analytics process was. Unfortunately, delivering transparency in SQL is very challenging, because it requires that you, the analyst, as well as these decision makers, can look at and understand the SQL code. It is not as easy as just sending someone your code and assuming they'll know what's happening in your program. In addition, if there happens to be something within that code that doesn't work, digging through it to understand where the code is broken is very tedious and time-consuming. With Alteryx, our workflow being by nature a very visual platform, it's really easy for not only you, but also decision makers and business leaders within your organization, to one, trace your process, and two, replicate your process if necessary. So not only can you see what you've done; others within your organization can see and understand it as well. Next slide. Thank you very much.

Secondly, we just want to touch on transformations. Generally speaking, becoming efficient and proficient in SQL is something that can take years of training. The ability to extract, transform, and load the right data set in a timely manner, to help you understand or impact that business decision, means that you not only need to be fast, but you need to be accurate in terms of the delivery of your code. So not only do you need to be able to write your code quickly, you want to write code that also runs very quickly on the back end. And let's be honest: from the perspective of an analyst sitting in the line of business, not many analysts want to write or sift through hundreds or thousands of lines of code. And even for the ones who have become very savvy at taking code from DBAs, adapting that code, and rewriting it, it might still take hours, if not days, to get the code updated, running, tested, and documented. Contrary to that, with Alteryx, extractions and transformations really have the ability to happen seamlessly without writing any code. It really is just as easy as dragging and dropping your tool onto a workflow and adjusting parameters. One of our customers, BAE (we have a customer testimonial on our website), was using SQL code to create dashboards for human resources.
They ended up using Alteryx, and they explained to us that the change management and code-updating process within Alteryx took them only seven and a half minutes. The analyst who was specifically responsible, first for the SQL code and then for using Alteryx, indicated that not only was he very impressed, but his broader team was very impressed by the time it took, and he himself was really excited not only to spend less time writing SQL code, but actually to be able to spend more time at home.

And in terms of debugging, testing, and prototyping, one of the biggest struggles that analysts face in writing SQL code is just getting that code to work. There's no autocorrect in SQL, so if you have a period or a comma in the wrong location, it won't be caught automatically, and it can end up making your whole script fail. So you can picture the typical prototyping scenario: you write your SQL statement, you wait for it to run, only to realize that the output isn't giving you the data set that you want. And remember, this process is an iterative one. So you'll write it again, run it again, wait again, rinse and repeat, and it can sometimes take hours, if not days, to get the output you're actually looking for when SQL isn't your coding language of preference and you're sitting in an analyst role. In contrast, a workflow within Alteryx makes debugging, testing, and prototyping so much more efficient, giving you the ability to understand exactly what is happening at each stage of the process, transparently. So if you're building out a workflow and you notice an error, it's really easy to go right to that point in the workflow and make the change without starting over. Another quick customer example here is AAA, also a video testimonial on our website. What this customer indicated he loves about Alteryx is that he no longer dreams about lines of code in front of his eyes when he sleeps, looking for that one line that's broken in SQL. Because with Alteryx, he can find everything. He knows where he is in the process, and no documentation is needed, because not only can he look at the process, everyone else can see it right in front of them as well.

And the last two points to make are really about accessibility and flexibility. From the accessibility perspective, if you think about how much data has changed over the past 10, 15, 20 years, a lot of data is now unstructured: social data, machine data, log files. And so a lot of organizations are moving to NoSQL data stores to store and analyze this information. But SQL was really developed in the era of RDBMSs, with data stored in rows and columns, right? Today's non-relational databases, like Mongo, Hadoop, and Hive, don't store data in this structured manner. So the bottom line is, if you're using SQL to access structured databases, using it to access unstructured data just won't work. In contrast, this is really the very first step that we talked about with Alteryx earlier: we're not discriminating when it comes to data structure or data location. We want to make it really easy for you to connect to and access data regardless of its format and regardless of its location. So the accessibility of Alteryx is really vast. And finally, flexibility. If anyone's familiar with writing, programming, and coding, leniency is probably not a word that you would use to describe this structure. If the code is not written correctly, it's not going to run, and the process is not going to complete.
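As a tiny illustration of the kind of failure Beth describes, here is a hypothetical query where one stray comma is enough to make the whole script fail, with the corrected version alongside:

```sql
-- Broken: the trailing comma after Country is a syntax error,
-- and nothing catches it until the script is run.
SELECT CustomerID, Country,
FROM Customers
WHERE Country = 'United States';

-- Fixed: remove the stray comma and the same query runs.
SELECT CustomerID, Country
FROM Customers
WHERE Country = 'United States';
```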
So for some of these analysts in the lines of business who have learned enough SQL to be dangerous, if you will, flexible probably isn't the word they'd use to describe this code-writing process. It might be easy for them to request code from their DBA or their IT colleague, but modifying that code to execute another data pull or to access a file with different specs isn't always straightforward or simple. A lot of times it's multiple attempts at trial and error, taking up time that they just don't have. A workflow environment like Alteryx is flexible enough to not only allow you to develop a new workflow and a new analytics procedure, but we also give you the ability to leverage existing SQL statements that you may have written and implement those into your analysis as well. And you will see that as part of the demo that Dan is going to do now. I keep mentioning the workflow, so I'm going to pass it over to him to actually show you a demo of how easy it is to use and how everything works together.

Great. Thanks, Beth. All right. So let's dive right in to some examples and see what this development process looks like. When you open up Alteryx Designer, you'll see an interface that looks like this. It's a drag-and-drop interface, as Beth described. There are a variety of tool categories that you'll notice along the top here. So we've got a whole bunch of different tools for data preparation, data blending, combining, joining, and parsing, and then we have some more advanced capabilities as well around predictive analytics, data investigation and profiling, some spatial analytics, and things like that.

So let's start with some simple examples that relate to concepts you'll be familiar with as a SQL user. I'm going to go to my In/Out category and look for a tool called Input Data. I drag the tool directly from my toolbar into my workspace area here. With this tool, I can pull in a huge variety of data file types, so all these possible data types from file. I can also connect to databases. In this case, I'm going to connect to a SQL Server. So I'm going to provide a name for my connection; I'll just call it Dan's database. And I'm going to provide a SQL Server machine name. I'll just test that my Windows authentication will get me in, and it looks like it's successful. And I'm going to select my default database. So now I get an interface that will allow me to define some type of a query and to see all the tables on that database. Initially, I'm going to look at a Customers table that I know I have in here. So I can browse among my tables and pull in Customers, and I'll just pull in all the fields. There's a visual query builder here, like the ones you're probably familiar with from SQL Server Management Studio or some other similar type of tool. So I've defined my connection, and now I can pull my data into my workflow by pressing this Run button at the top. Press Play. I see in my results pane here at the bottom left that my process worked, and it tells me a few things about what happened: how many records I pulled in and how long it took. I can also visualize my data right off the bat. We have a tool here in the In/Out category called Browse. I pull that tool into the canvas area, and it connects up automatically to my input tool. Now when I press Run, I get to see the data that I pulled in. So I have a little results pane here on the bottom right that gives me a look at all the data that I'm pulling in.
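Behind the scenes, what the visual query builder has assembled at this point is, roughly, just a plain select of every column (table name per the demo):

```sql
-- The query the Input Data tool is effectively issuing here:
SELECT *
FROM Customers;
```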
So it looks like this Customers table contains some information about our customers, a variety of fields; we've got 10 fields. We can do a little bit of profiling on the data that's coming in. As we click on columns here, we can visualize some profiling information. This is useful for spotting any kinds of data quality problems, understanding the distribution of the data, understanding data types, identifying empty fields, and things of that nature.

All right, so we've pulled some data in from our table, and now we're going to do a couple of simple operations. I'll go to my Preparation category. We've got a variety of tools in this category for common data preparation activities: things like filtering, generating formulas, sampling, and so on. One of the tools that we use very commonly is called a Select tool. It allows us to do things like renaming, so I could call this field CustomerID; it's basically equivalent to a SQL AS. It allows us to change data types, which would be analogous to a SQL CAST or CONVERT. And it allows us to remove columns from our data stream at this point in the process. For my example here, there are a lot of these columns that I don't really need, so I'm going to turn them off. And now when we use our Browse tool, we can visualize the data at each point. So here I have a Browse tool that shows me the tabular data as it comes in, and profiling information if I need to see that. But then we've done this transformation step where we've modified our data a little bit and removed some columns, so now I can visualize the data at that point in the process.

And now we can add another tool to make our data distinct. Maybe we want to get a distinct list of countries. We'll use a tool called Unique, and we'll unique on Country. Press Run again, and now we see in our results that we've got unique country values in the Country column. One of the things to notice is that the Unique tool outputs the unique data out of one side and the duplicate data out of another side. This gives us a lot of transparency in the process in terms of fallout, and we'll see this paradigm in other places as well. We haven't lost any records. We can treat our fallout records here if we need to, to do additional processing on them. We can basically split our data stream into multiple chunks and treat different sets of data differently, and then we'll talk about how we combine those streams in a minute. So that's a simple example: in SQL, you would see a query that was something like a SELECT DISTINCT on country from our Customers table. We would produce a list like this, basically accomplishing the same thing using some other tools, including this Unique tool.
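Putting those last two tools in SQL terms, the Select tool and the Unique tool map roughly to the following. The original column names are assumptions, since only the renamed versions appear in the demo:

```sql
-- Rough SQL equivalent of the Select tool: rename (AS),
-- retype (CAST), and drop columns by simply not listing them.
SELECT id AS CustomerID,                               -- rename
       CAST(PostalCode AS VARCHAR(10)) AS PostalCode,  -- change type
       Country                                         -- unlisted columns are dropped
FROM Customers;

-- Rough SQL equivalent of the Unique tool on Country:
SELECT DISTINCT Country
FROM Customers;
```

One difference worth noting: SELECT DISTINCT discards the duplicates entirely, while the Unique tool hands them to you on its second output so nothing is lost.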
So let's look at another example. In this example, I'm going to pull some data from a table in my database called Orders. I could go through that process of finding the SQL Server and finding the table again, but I can also reuse a connection that I've just made. I set up this Dan's database connection previously, so now I can just reuse it; I don't have to go through that setup process again. I'm going to look for my Orders table and pull in all the fields. Again, I'll browse the data right off the bat to make sure I understand what it looks like. So it looks like we've got an order ID, we've got a customer ID, we've got an order amount, and a few other attributes about the order.

And we can do some different types of filtering activities that are analogous to different SQL functions as well. We have a tool called Filter that comes from the Preparation category; I'll pull that in. You'll notice that the Filter tool has two outputs, a true and a false, so we can define some type of an expression that we want to use to break records into two different streams. Maybe what I want to do is look at my fields and say my order date should be greater than or equal to some date; let's say 2016-12-01. And then we also want it to be less than a certain date as well, so that it falls inside of a certain range. So now we'll inject our order date column and say less than or equal to some other value, maybe 2016-12-31. Let's just press Play again. We see that 76 records out of the 1,000 that we started with met that criteria. So this is equivalent to a SQL BETWEEN. I have the ability to annotate each tool as I go, to provide documentation. You'll notice these little white boxes below each tool; this documentation is automatically supplied, and it gives you some idea about what the expression or the input data is. So here I can annotate this and say it's equivalent to a SQL BETWEEN. We've also got a variety of tools in the documentation category that allow us to label different sections of our workflow; we call these tool containers. So this one I could label as my select, and this one I could label as my between. And you'll notice that we get the fallout from this filter process as well: we get the 76 records that met the criteria, we get the 924 that fall out, and we can review each of those records by looking at the results.

We could do a similar process with filtering, equivalent maybe to an IN statement. So let me just show this real quick. I'm going to connect again to my Customers table here. I could use Filter in the same way that I would use a SQL IN. Maybe I just want to grab my customers that are in a certain country, United States or United Kingdom. So here's my IN operator. Now when I run this and look at the results, it looks like I've got 800-some-odd records that are in the U.S. and the U.K., and then my fallout records are everywhere else. So there's a lot of transparency in our process, because we understand fallout at every point where we're producing fallout. If we had to, we could treat that fallout differently, and it's very quick to troubleshoot. So if we weren't expecting some particular type of condition, we can see that very clearly in our Browse. And using this labeling mechanism for the documentation helps us produce a workflow that is reusable and clearly understood. We have this visual paradigm where you can see the data flowing through the process, and it's organized in a way that a new person could come along, pick it up, and have a sense for which piece of the process is doing what.

So we've done some simple filtering, and we've done some simple SELECT and AS types of operations. Let's talk a little bit about blending data. We have a category called Join that allows us to do a variety of things: simple joins; unioning data, so stacking records on top of one another; and appending fields, which is kind of like an equivalent to a cross join. We also have some capability to do things like fuzzy matching and some more advanced types of joining.
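For reference, the two Filter expressions Dan just built correspond to WHERE clauses along these lines (dates and countries taken from the demo):

```sql
-- The date-range filter: equivalent to a SQL BETWEEN.
SELECT *
FROM Orders
WHERE OrderDate BETWEEN '2016-12-01' AND '2016-12-31';

-- The country filter: equivalent to a SQL IN.
SELECT *
FROM Customers
WHERE Country IN ('United States', 'United Kingdom');
```

In SQL you'd need a second query with NOT BETWEEN or NOT IN to inspect the records that fail the test; the Filter tool's false output gives you that fallout automatically.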
So I'm going to again pull in my Customers, and I'll also pull in Orders. I'm going to join these two data sets on a key; from the previous example, we identified that there was an ID in common between these two sets. What we can do is a simple join process, equivalent to a SQL inner join, where we point to the column from each source that is our key. So on the left, here we have one called Country. On the right, I'm going to actually use a different table for this: a distribution centers table. I'll just browse this real quick to make sure I understand the format of the distribution centers. Okay, so we've got a distribution center ID, a country, and a name, and I've still got my customers. I can join my customers and my distribution centers on some column in common, some key. In this case, I'll join by Country, and I see records that did join according to an inner join, and a few records that fall out on either side. So these records coming out of this input are present in this data source, but they did not join to the other data source; and those records were present in the other data source and did not join. We had no fallout on the right, so I'm just going to label this as my inner join.

We could do a similar type of operation to accomplish a SQL cross join. For this, I'm going to use a tool that's actually a little different; it's called Append Fields. Append is kind of a multiplicative-effect cross join; I think of it as kind of a dumb join. It's just going to put all the data from both sides together, and we'll browse the result: a thousand records coming out of our Customers table, five records coming out of our distribution centers, and our resulting data has 5,000 records. So that's a quick sample of some of the tools in the Join category.
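In SQL terms, the two joins Dan just demonstrated look roughly like this; the DistributionCenters table and column names are assumptions based on the demo:

```sql
-- The Join tool's joined output: a SQL inner join on Country.
SELECT c.*, d.Name AS DistributionCenterName
FROM Customers AS c
INNER JOIN DistributionCenters AS d
        ON d.Country = c.Country;

-- The Append Fields tool: a cross join, pairing every customer
-- with every center (1,000 x 5 = 5,000 rows).
SELECT c.*, d.*
FROM Customers AS c
CROSS JOIN DistributionCenters AS d;
```

The difference again is fallout: the Join tool's left and right outputs surface the non-matching rows that an inner join silently drops.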
So now let's look at a little more complex example. Let's think about actually trying to solve a more standard type of use case that you might see in your business. Maybe we have a real business problem where we'd like to take our customers and our orders, and we want to determine which non-U.S. countries have the highest average order size. With Alteryx, we can rapidly prototype and test this process in a way that's understandable and accessible both to developers who may be more used to coding and to analysts who are familiar with data but maybe aren't developers or coders. So we can build a process that answers that question fairly quickly.

I'm going to again start with my Input Data tool. First, I'm going to find Customers, and then I'm going to find Orders, and I'm going to limit some of my customers. So I'm going to use a Filter tool again, and what I'm going to say here is where my country does not equal the United States; we're looking for everything outside of the U.S. I'll just test that real quick. So we've got 254 records outside of the U.S., and you can see that real quick here. Looks good. The next step is to join together these two data sets. So I'm going to use that Join tool again. I'm going to take my non-U.S. records and all of my orders, and I'm going to join by customer: by ID on the left side and customer ID on the right side. Test my process. Looks like we've got some fallout on both sides. So remember before, I noted that we can deal with the joined data or either piece of the fallout.

And suppose that we want to keep 100% of customers, whether or not they had an order in the Orders table. We looked previously at how to create an inner join, but we could just as easily do a left outer join. So we have an additional tool in our Join category called Union. I can drag this in and take my joined results and my left fallout results and send them both into a Union tool. So now we've got the equivalent here of a left outer join. I can label my process as I go along, and I can also use these tool containers to provide some structure as well, so people coming after me can understand how this process works. So I've done my left outer join.

Next I want to do something a little more complicated: some type of a roll-up. I have a Transform category here, and one of the tools in this category is called Summarize. I'm going to use this to do kind of a group-by operation. So I'm going to group by country. Remember, our ultimate goal here is to determine average order size by country. So I'm going to find my Country column and say group by, and then I want to average the total amount for my orders by country. So I'm going to add an aggregation operation here: I want to average my total order amount to come up with a country-level order average. I can test my result here real quick. It looks like we've got a unique list of countries, and then we've got order averages per country. And we could filter this down a little bit more. Suppose we want another type of SQL process, similar to a HAVING clause. What we could do is supply another filter and say we want our average total amount to be greater than or equal to some threshold; in our case, let's say it's a thousand. And I'll document this just to make sure it's clear: I'll say that this bit is basically our group-by aggregation, and this bit is equivalent to a SQL HAVING.

And we're almost done; we've almost answered our question here. We've got our unique countries, we've got our average order amounts. Now let's just clean this up a little bit and sort it by the average amount. I'm going to use a Sort tool from the Preparation category, and I'm going to sort by average total amount descending, so that the highest one is first. And we basically have our answer now, arrived at with a documented, repeatable, and accessible workflow that can output to a wide variety of other formats. So from here, now that I've got my answer, I can use an Output Data tool, which is very similar to my Input Data tool. I can write to any of these file types; you'll note all kinds: Microsoft Access, Excel, SQLite, Tableau. Let's write a Tableau data extract here; I'll call it reportdata.tde. I could also write to multiple outputs at once, so maybe here I want to write to an .xlsx file as well, and I'll give it a sheet name. We run this process, and we notice in our log here on the bottom left that it tells us these files were written. So there's a lot of flexibility in writing to files. We could also just as easily write back to our SQL Server database or some other type of database; we could write to Oracle or Hadoop or something else.
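For readers keeping score in SQL, the whole workflow Dan just assembled condenses to roughly one query. The column names (ID, CustomerID, TotalAmount) follow the demo, and the threshold is the one he chose:

```sql
-- Rough SQL equivalent of the finished workflow:
-- filter, left outer join, group-by average, HAVING, and sort.
SELECT c.Country,
       AVG(o.TotalAmount) AS AvgOrderAmount
FROM Customers AS c
LEFT OUTER JOIN Orders AS o
       ON o.CustomerID = c.ID     -- the Join tool plus the Union of left fallout
WHERE c.Country <> 'United States'  -- the non-U.S. Filter
GROUP BY c.Country                  -- the Summarize tool
HAVING AVG(o.TotalAmount) >= 1000   -- the threshold Filter
ORDER BY AvgOrderAmount DESC;       -- the Sort tool
```

The workflow and the query produce the same answer; the difference is that each intermediate step in the workflow stays visible, labeled, and individually testable.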
And suppose our process was actually bigger than this. We could extend what we've produced here and incorporate predictive analytics, any of the predictive models that come packaged with our product. We could include some spatial analytics if we wanted to, maybe consuming some spatial data for store locations or trade area creation or something like that. We've also got a variety of enrichment data sets that we have access to, which can pull demographic data and enrich the data that we already have for our customers. So that's it for the demo. Beth, do you have some final thoughts around these concepts?

Yeah, I think before we move into Q&A, just to summarize: Dan definitely showed you the Alteryx workflow, but we also saw a little bit of SQL in terms of how the SQL code would be written. So just to summarize our points as well as the demo: oftentimes analysts are tasked with writing hundreds, even thousands, of lines of code to help them build out a data and analytics process, whether or not they're savvy SQL coders. It's definitely something that can be time-consuming to test and debug, which we talked through earlier. But also, when you're thinking in terms of your broader organization, translating that process so people understand the steps you took can be challenging if someone is not familiar with SQL code. So instead of knowing the ins and outs of some of the statements Dan went through (I think we talked about select, group by, join, and distinct), with Alteryx, rather than focusing on the code, what we're really asking you to focus on is knowing your data and improving your analysis.

So if you look at the next slide, which has a visual screenshot of our workflow, our goal is really to empower you as the analyst with the ability to generate this full analytics process in a drag-and-drop workflow environment. So whether you're creating simple SQL queries or looking to build a more complex process that you would have handed off to a SQL developer or someone on the IT team, with Alteryx you actually have the ability to create those processes yourself. As an analyst, that gives you the ability to be more efficient with your time. It also generally helps the IT team alleviate some of the backlog of those complicated SQL queries, since you as the analyst have the ability to do it yourself. And at the end of the day, again, there's the real value that we talked about: delivering deeper insights in hours, not weeks. I think Dan did a great job in the demo of showing you how you would do that within Alteryx. But at the end of the day, the ability to create a process that's transparent not only to the SQL coder and the business analyst, but also to decision makers across the organization, is really what we're trying to help deliver.

And we did want to leave you with a few final assets in terms of where you can go next for more information. If you go to the next slide, please, Dan. We have a brand new website that we're launching later this afternoon, and we're calling it the Alteryx Translator. It will be available this afternoon at alteryx.com/alteryx-for-sql. And what we're really trying to show you there, which aligns with the webinar that we did today, is that if you know what your SQL code looks like, because you've created it, we want to help you translate that, if you will, into what it would look like in Alteryx. So when this goes live this afternoon, you'll see on the left-hand side what the SQL code looks like.
And then on the right-hand side, the actual tools that you would use within Alteryx to do the same type of process. The other resource that we wanted to leave you with is our product training website, because we do have a pre-recorded session that speaks specifically to the most common SQL processes that analysts run. It's, I think, about an hour-long session that will walk you through, along with a trial download, what the process will look like when you're building it out in Alteryx. You can pause it at certain points if you need to, and it will really help you understand how that workflow gets built. So those are great resources if you want to check them out. And I think I'll turn it over to you, Shannon, because I do know that we had some questions coming in.

Yes, lots of great questions coming in. And just to answer the most commonly asked question: a reminder that I will be sending a link to the slides and to the recording of this session within two business days, so by end of day Thursday for this webinar, to all registrants. And just to dive right into it here, Dan and Beth, thanks so much for this great presentation; it's definitely inspired a lot of good discussion here. A very direct question: can Alteryx connect to Workday?

I don't think so natively. That's an API, is that an API-based thing? You know, I'll let Dan expand a bit, certainly. So I believe that there is an API for it. One of the pieces that we have built into the product is a tool called the Download Tool, which allows you to make HTTP requests. So that tool is here; let me just search for it. Download Tool. This allows you to pass some type of request to an API and receive the response in the middle of your stream. We have a series of connectors here that we've packaged with the product, things like an Amazon connector, Marketo, Google Analytics, Salesforce, SharePoint, and things like that. For APIs that are outside of this packaged list, you can code your own connectors with our Download Tool. We also have other clients that have built connectors to other services beyond this list, which are available for free download from our public gallery. So you can add supplemental content into your Alteryx Designer from our public gallery, based off development that other customers have done to address specific connectors and specific data problems that we haven't packaged within the product.

All right. And you know, I'd be surprised if this question didn't come up; it's one of the hottest topics in our community. How does Alteryx manage metadata? Can it generate metadata for targets and store it in a metadata management system?

Yes. So we have some tools built into the product related to metadata. One of those is called Field Info. Field Info can take metadata and turn it into data. So let's see how this works. I'm pointing it at my Orders table. It looks at the incoming data, and it tells me the field name, the incoming type and size, and also some information about the source location. So you can do all kinds of things with this data. Oftentimes you'll find that there are multiple ways to accomplish a certain thing with Alteryx. The product is very extensible, in that we provide the real fundamental building blocks of data work, and you can craft whatever solution you need given those fundamental building blocks.
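For what it's worth, the database-side analogue of what Field Info does, turning metadata into ordinary rows you can query, is the standard catalog views, for example:

```sql
-- Metadata as data: column names, types, and sizes for a table,
-- queried from the ANSI-standard catalog views in SQL Server.
SELECT COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Orders';
```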
So we have a lot of customers that will use a tool like Field Info to generate some type of validation report, or to interface with some other type of metadata management tool as part of the output. So you may have some outputs that are metadata-related and some outputs that are actually data-related.

So, what protections are available for personally identifiable information? For example, can data be masked, whether reversibly or not?

We don't really have anything built into the product to deal with obfuscating or masking data; Alteryx Designer is a desktop product. We do have a variety of functions that you can access through the Formula tool. So we have this library of functions available here: functions related to string operations, including hashing and other types of string operations around stripping or regex, things like that. So you could easily build some type of process with a series of formulas to, say, hash certain types of data and then output that hashed file. Let's say we had a column called Social Security Number: you could hash those values and output them back to SQL or to a TDE or something else, so that downstream users would never see the actual values.

All righty. And does Alteryx have in-database capabilities?

Yeah, that's a good question. It's kind of a hot topic these days, as more database vendors are providing more capabilities around in-database processing, including data modeling and predictive modeling. We do have a category called In-Database. These tools are intended to be used as a group, and what they do is each add to a SQL query that gets executed at the end of the in-database process. So let me just mock up an example here. I would use a Connect In-DB tool, where I point to a particular location, and then I can do some basic operations; we have a subset of functionality that's similar to the functions available for non-in-database operations. I also have the ability to write in-database, with a tool called Write Data In-DB. So if I wanted to create other tables on the fly, I could do that; I could create temp tables that way. This series of tools that I've created just results in one master SQL query that will get executed at runtime. At this point in time, I'm actually not requesting that any records come back to me, to this application machine. All that's going to happen is one query is going to get shuttled off to the SQL Server for execution. If I do want some record set, probably a filtered record set, to come back to me, I can use a tool called Data Stream Out, which will request records come back to this Alteryx machine. So this is used mostly for cases where you're working with a really large amount of data and you want to take advantage of the horsepower on your SQL Server; you don't want to have to stream millions and millions of records across the network and then put them back into the database. So this is kind of a newer feature set, and we're expanding it a lot to take advantage of some of the technology that's coming down the road from Microsoft and Oracle and some other relational database vendors around predictive modeling as well.
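To make that concrete, a chain of in-database tools like the one Dan mocks up compiles, roughly, to a single nested query along these lines (the table, columns, and filter are illustrative assumptions):

```sql
-- One consolidated query, executed entirely on the server.
-- Connect In-DB -> Filter In-DB -> Summarize In-DB -> Data Stream Out
SELECT Country,
       AVG(TotalAmount) AS AvgOrderAmount
FROM (
    SELECT Country, TotalAmount
    FROM Orders
    WHERE OrderDate >= '2016-12-01'   -- the Filter In-DB step
) AS filtered
GROUP BY Country;                     -- the Summarize In-DB step
-- Only this final, aggregated result streams back to the Alteryx
-- machine; the raw Orders rows never leave the database.
```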
I love all the specific questions coming in here. Is there a performance penalty for leaving the Browse tools in the workflow?

Yes, there is some performance penalty. The full data at that point in the process is written to a temp file, so you will see some performance hits. Generally, you probably wouldn't want to have Browse tools all over the place if you have a large volume of data. There are a variety of different options for speeding up the development cycle. One possibility is that in the Input tool you can limit the number of records that come in, and then remove that limitation at production time. We also have some ability in the runtime settings to disable all Browse tools, so when you're production-ready you can come in and click this so that they're all still present but none of them are going to execute.

I got so excited by the presentation and all the questions coming in that I just realized we are out of time; we are right at the top of the hour. We've got so many great questions left, and I will get these over to you. Likewise, just a reminder to all the attendees: I will send a follow-up email by end of day Thursday with links to the slides and the recording. That will also include contact information for reaching Alteryx with additional questions and such. Dan and Beth, thank you so much for taking the time to be with us today. It's just been a great presentation. Clearly the attendees are asking a lot of questions here, which means great involvement, and thanks to our attendees for being so engaged in everything we do. We do just love all the questions that come in and the engagement with the webinars themselves. With that, I hope everyone has a great day, and I hope you can all attend the March 28 webinar coming up as well with Alteryx. Beth and Dan, thank you so much.

Sure, thanks. Thanks, everyone, for your time.